00:00:00.001 Started by upstream project "autotest-per-patch" build number 126257 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.017 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.033 Fetching changes from the remote Git repository 00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.051 Using shallow fetch with depth 1 00:00:00.051 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.051 > git --version # timeout=10 00:00:00.077 > git --version # 'git version 2.39.2' 00:00:00.077 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.116 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.116 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.224 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.235 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.247 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:02.247 > git config core.sparsecheckout # timeout=10 00:00:02.258 > git read-tree -mu HEAD # timeout=10 00:00:02.273 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:02.294 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:02.294 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:02.382 [Pipeline] Start of Pipeline 00:00:02.401 [Pipeline] library 00:00:02.403 Loading library shm_lib@master 00:00:02.403 Library shm_lib@master is cached. Copying from home. 00:00:02.437 [Pipeline] node 00:00:02.447 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.451 [Pipeline] { 00:00:02.463 [Pipeline] catchError 00:00:02.464 [Pipeline] { 00:00:02.474 [Pipeline] wrap 00:00:02.481 [Pipeline] { 00:00:02.487 [Pipeline] stage 00:00:02.488 [Pipeline] { (Prologue) 00:00:02.679 [Pipeline] sh 00:00:02.958 + logger -p user.info -t JENKINS-CI 00:00:02.977 [Pipeline] echo 00:00:02.979 Node: GP11 00:00:02.987 [Pipeline] sh 00:00:03.293 [Pipeline] setCustomBuildProperty 00:00:03.309 [Pipeline] echo 00:00:03.310 Cleanup processes 00:00:03.315 [Pipeline] sh 00:00:03.597 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.597 2071783 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.611 [Pipeline] sh 00:00:03.890 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.890 ++ grep -v 'sudo pgrep' 00:00:03.890 ++ awk '{print $1}' 00:00:03.890 + sudo kill -9 00:00:03.890 + true 00:00:03.903 [Pipeline] cleanWs 00:00:03.912 [WS-CLEANUP] Deleting project workspace... 00:00:03.913 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.920 [WS-CLEANUP] done 00:00:03.923 [Pipeline] setCustomBuildProperty 00:00:03.938 [Pipeline] sh 00:00:04.215 + sudo git config --global --replace-all safe.directory '*' 00:00:04.297 [Pipeline] httpRequest 00:00:04.320 [Pipeline] echo 00:00:04.322 Sorcerer 10.211.164.101 is alive 00:00:04.330 [Pipeline] httpRequest 00:00:04.341 HttpMethod: GET 00:00:04.341 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.342 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.344 Response Code: HTTP/1.1 200 OK 00:00:04.345 Success: Status code 200 is in the accepted range: 200,404 00:00:04.345 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.749 [Pipeline] sh 00:00:05.025 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.038 [Pipeline] httpRequest 00:00:05.052 [Pipeline] echo 00:00:05.054 Sorcerer 10.211.164.101 is alive 00:00:05.060 [Pipeline] httpRequest 00:00:05.065 HttpMethod: GET 00:00:05.065 URL: http://10.211.164.101/packages/spdk_47ca8c1aa0676b27d5fc96333e25500051b1e23d.tar.gz 00:00:05.066 Sending request to url: http://10.211.164.101/packages/spdk_47ca8c1aa0676b27d5fc96333e25500051b1e23d.tar.gz 00:00:05.068 Response Code: HTTP/1.1 200 OK 00:00:05.069 Success: Status code 200 is in the accepted range: 200,404 00:00:05.069 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_47ca8c1aa0676b27d5fc96333e25500051b1e23d.tar.gz 00:00:23.949 [Pipeline] sh 00:00:24.235 + tar --no-same-owner -xf spdk_47ca8c1aa0676b27d5fc96333e25500051b1e23d.tar.gz 00:00:27.534 [Pipeline] sh 00:00:27.820 + git -C spdk log --oneline -n5 00:00:27.820 47ca8c1aa nvme: populate socket_id for rdma controllers 00:00:27.820 c1860effd nvme: populate socket_id for tcp controllers 00:00:27.820 91f51bb85 nvme: populate socket_id for pcie controllers 00:00:27.821 c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:00:27.821 b26ca8289 event: add enforce_numa app option 00:00:27.837 [Pipeline] } 00:00:27.858 [Pipeline] // stage 00:00:27.868 [Pipeline] stage 00:00:27.870 [Pipeline] { (Prepare) 00:00:27.892 [Pipeline] writeFile 00:00:27.912 [Pipeline] sh 00:00:28.196 + logger -p user.info -t JENKINS-CI 00:00:28.209 [Pipeline] sh 00:00:28.493 + logger -p user.info -t JENKINS-CI 00:00:28.508 [Pipeline] sh 00:00:28.819 + cat autorun-spdk.conf 00:00:28.819 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.819 SPDK_TEST_NVMF=1 00:00:28.819 SPDK_TEST_NVME_CLI=1 00:00:28.819 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:28.819 SPDK_TEST_NVMF_NICS=e810 00:00:28.819 SPDK_TEST_VFIOUSER=1 00:00:28.819 SPDK_RUN_UBSAN=1 00:00:28.819 NET_TYPE=phy 00:00:28.826 RUN_NIGHTLY=0 00:00:28.830 [Pipeline] readFile 00:00:28.859 [Pipeline] withEnv 00:00:28.861 [Pipeline] { 00:00:28.876 [Pipeline] sh 00:00:29.160 + set -ex 00:00:29.160 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:29.160 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:29.160 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.160 ++ SPDK_TEST_NVMF=1 00:00:29.160 ++ SPDK_TEST_NVME_CLI=1 00:00:29.160 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.160 ++ SPDK_TEST_NVMF_NICS=e810 00:00:29.160 ++ SPDK_TEST_VFIOUSER=1 00:00:29.160 ++ SPDK_RUN_UBSAN=1 00:00:29.160 ++ NET_TYPE=phy 00:00:29.160 ++ RUN_NIGHTLY=0 00:00:29.160 + case $SPDK_TEST_NVMF_NICS in 00:00:29.160 + DRIVERS=ice 00:00:29.160 + [[ tcp == \r\d\m\a ]] 00:00:29.160 + 
[[ -n ice ]] 00:00:29.160 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:29.160 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:32.454 rmmod: ERROR: Module irdma is not currently loaded 00:00:32.454 rmmod: ERROR: Module i40iw is not currently loaded 00:00:32.454 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:32.454 + true 00:00:32.454 + for D in $DRIVERS 00:00:32.454 + sudo modprobe ice 00:00:32.454 + exit 0 00:00:32.463 [Pipeline] } 00:00:32.503 [Pipeline] // withEnv 00:00:32.509 [Pipeline] } 00:00:32.530 [Pipeline] // stage 00:00:32.540 [Pipeline] catchError 00:00:32.541 [Pipeline] { 00:00:32.556 [Pipeline] timeout 00:00:32.557 Timeout set to expire in 50 min 00:00:32.558 [Pipeline] { 00:00:32.574 [Pipeline] stage 00:00:32.577 [Pipeline] { (Tests) 00:00:32.593 [Pipeline] sh 00:00:32.880 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.880 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.880 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.880 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:32.880 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:32.880 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:32.880 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:32.880 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:32.880 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:32.880 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:32.880 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:32.880 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.880 + source /etc/os-release 00:00:32.880 ++ NAME='Fedora Linux' 00:00:32.880 ++ VERSION='38 (Cloud Edition)' 00:00:32.880 ++ ID=fedora 00:00:32.880 ++ VERSION_ID=38 00:00:32.880 ++ VERSION_CODENAME= 00:00:32.880 ++ PLATFORM_ID=platform:f38 00:00:32.880 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:32.880 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:32.880 ++ LOGO=fedora-logo-icon 00:00:32.880 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:32.880 ++ HOME_URL=https://fedoraproject.org/ 00:00:32.880 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:32.880 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:32.880 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:32.880 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:32.880 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:32.880 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:32.880 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:32.880 ++ SUPPORT_END=2024-05-14 00:00:32.880 ++ VARIANT='Cloud Edition' 00:00:32.880 ++ VARIANT_ID=cloud 00:00:32.880 + uname -a 00:00:32.880 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:32.880 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:33.828 Hugepages 00:00:33.828 node hugesize free / total 00:00:33.828 node0 1048576kB 0 / 0 00:00:33.828 node0 2048kB 0 / 0 00:00:33.828 node1 1048576kB 0 / 0 00:00:33.828 node1 2048kB 0 / 0 00:00:33.828 00:00:33.828 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:33.828 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:33.828 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:33.828 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:33.828 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:33.828 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 
00:00:33.828 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:33.828 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:33.828 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:33.828 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:33.828 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:33.828 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:33.828 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:33.828 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:33.828 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:33.828 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:33.828 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:33.828 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:33.828 + rm -f /tmp/spdk-ld-path 00:00:33.828 + source autorun-spdk.conf 00:00:33.828 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.828 ++ SPDK_TEST_NVMF=1 00:00:33.828 ++ SPDK_TEST_NVME_CLI=1 00:00:33.828 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.828 ++ SPDK_TEST_NVMF_NICS=e810 00:00:33.828 ++ SPDK_TEST_VFIOUSER=1 00:00:33.828 ++ SPDK_RUN_UBSAN=1 00:00:33.828 ++ NET_TYPE=phy 00:00:33.828 ++ RUN_NIGHTLY=0 00:00:33.828 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:33.828 + [[ -n '' ]] 00:00:33.828 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:33.828 + for M in /var/spdk/build-*-manifest.txt 00:00:33.828 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:33.828 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:33.828 + for M in /var/spdk/build-*-manifest.txt 00:00:33.828 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:33.828 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:33.828 ++ uname 00:00:33.828 + [[ Linux == \L\i\n\u\x ]] 00:00:33.828 + sudo dmesg -T 00:00:33.828 + sudo dmesg --clear 00:00:33.828 + dmesg_pid=2072466 00:00:33.828 + [[ Fedora Linux == FreeBSD ]] 00:00:33.828 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:33.828 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:33.828 + sudo dmesg -Tw 00:00:33.828 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:33.828 + [[ -x /usr/src/fio-static/fio ]] 00:00:33.828 + export FIO_BIN=/usr/src/fio-static/fio 00:00:33.828 + FIO_BIN=/usr/src/fio-static/fio 00:00:33.828 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:33.828 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:33.828 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:33.828 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.828 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.828 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:33.828 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.828 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.828 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:33.828 Test configuration: 00:00:34.086 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.086 SPDK_TEST_NVMF=1 00:00:34.086 SPDK_TEST_NVME_CLI=1 00:00:34.086 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.086 SPDK_TEST_NVMF_NICS=e810 00:00:34.086 SPDK_TEST_VFIOUSER=1 00:00:34.086 SPDK_RUN_UBSAN=1 00:00:34.086 NET_TYPE=phy 00:00:34.086 RUN_NIGHTLY=0 00:04:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:34.086 00:04:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:34.086 00:04:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:34.087 00:04:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:34.087 00:04:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:34.087 00:04:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:34.087 00:04:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:34.087 00:04:25 -- paths/export.sh@5 -- $ export PATH 00:00:34.087 00:04:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:34.087 00:04:25 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:34.087 00:04:25 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:34.087 00:04:25 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721081065.XXXXXX 00:00:34.087 00:04:25 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721081065.qrYAaw 00:00:34.087 00:04:25 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:34.087 00:04:25 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:34.087 00:04:25 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:34.087 00:04:25 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:34.087 00:04:25 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:34.087 00:04:25 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:34.087 00:04:25 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:34.087 00:04:25 -- common/autotest_common.sh@10 -- $ set +x 00:00:34.087 00:04:25 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:34.087 00:04:25 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:34.087 00:04:25 -- pm/common@17 -- $ local monitor 00:00:34.087 00:04:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:34.087 00:04:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:34.087 00:04:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:34.087 00:04:25 -- pm/common@21 -- $ date +%s 00:00:34.087 00:04:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:34.087 00:04:25 -- pm/common@21 -- $ date +%s 00:00:34.087 00:04:25 -- pm/common@25 -- $ sleep 1 00:00:34.087 00:04:25 -- pm/common@21 -- $ date +%s 00:00:34.087 00:04:25 -- pm/common@21 -- $ date +%s 00:00:34.087 00:04:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081065 00:00:34.087 00:04:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081065 00:00:34.087 00:04:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081065 00:00:34.087 00:04:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081065 00:00:34.087 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081065_collect-vmstat.pm.log 00:00:34.087 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081065_collect-cpu-load.pm.log 00:00:34.087 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081065_collect-cpu-temp.pm.log 00:00:34.087 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081065_collect-bmc-pm.bmc.pm.log 00:00:35.025 00:04:26 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:35.025 00:04:26 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:35.025 00:04:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:35.025 00:04:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:35.025 00:04:26 -- spdk/autobuild.sh@16 -- $ date -u 00:00:35.025 Mon Jul 15 10:04:26 PM UTC 2024 00:00:35.025 00:04:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:35.025 v24.09-pre-233-g47ca8c1aa 00:00:35.025 00:04:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:35.025 00:04:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:35.025 00:04:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:35.025 00:04:26 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:35.025 00:04:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:35.025 00:04:26 -- common/autotest_common.sh@10 -- $ set +x 00:00:35.025 ************************************ 00:00:35.025 START TEST ubsan 00:00:35.025 ************************************ 00:00:35.025 00:04:26 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:35.025 using ubsan 00:00:35.025 00:00:35.025 real 0m0.000s 00:00:35.025 user 0m0.000s 00:00:35.025 sys 0m0.000s 00:00:35.025 00:04:26 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:35.025 00:04:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:35.025 ************************************ 00:00:35.025 END TEST ubsan 00:00:35.025 ************************************ 00:00:35.026 00:04:26 -- common/autotest_common.sh@1142 -- $ return 0 00:00:35.026 00:04:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:35.026 00:04:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:35.026 00:04:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:35.026 00:04:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:35.026 00:04:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:35.026 00:04:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:35.026 00:04:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:35.026 00:04:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:35.026 00:04:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:35.284 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:35.284 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:35.544 Using 'verbs' RDMA provider 00:00:46.150 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:56.132 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:56.132 Creating mk/config.mk...done. 00:00:56.132 Creating mk/cc.flags.mk...done. 00:00:56.132 Type 'make' to build. 00:00:56.132 00:04:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:00:56.132 00:04:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:56.132 00:04:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:56.132 00:04:47 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.132 ************************************ 00:00:56.132 START TEST make 00:00:56.132 ************************************ 00:00:56.132 00:04:47 make -- common/autotest_common.sh@1123 -- $ make -j48 00:00:56.132 make[1]: Nothing to be done for 'all'. 
00:00:57.518 The Meson build system 00:00:57.518 Version: 1.3.1 00:00:57.518 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:00:57.518 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:57.518 Build type: native build 00:00:57.518 Project name: libvfio-user 00:00:57.518 Project version: 0.0.1 00:00:57.518 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:57.518 C linker for the host machine: cc ld.bfd 2.39-16 00:00:57.518 Host machine cpu family: x86_64 00:00:57.518 Host machine cpu: x86_64 00:00:57.518 Run-time dependency threads found: YES 00:00:57.518 Library dl found: YES 00:00:57.518 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:57.518 Run-time dependency json-c found: YES 0.17 00:00:57.518 Run-time dependency cmocka found: YES 1.1.7 00:00:57.518 Program pytest-3 found: NO 00:00:57.518 Program flake8 found: NO 00:00:57.518 Program misspell-fixer found: NO 00:00:57.518 Program restructuredtext-lint found: NO 00:00:57.518 Program valgrind found: YES (/usr/bin/valgrind) 00:00:57.518 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:57.518 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:57.518 Compiler for C supports arguments -Wwrite-strings: YES 00:00:57.518 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:00:57.518 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:00:57.518 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:00:57.518 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:00:57.518 Build targets in project: 8 00:00:57.518 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:00:57.518 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:00:57.518 00:00:57.518 libvfio-user 0.0.1 00:00:57.518 00:00:57.518 User defined options 00:00:57.518 buildtype : debug 00:00:57.518 default_library: shared 00:00:57.518 libdir : /usr/local/lib 00:00:57.518 00:00:57.518 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:58.091 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:58.355 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:00:58.355 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:00:58.355 [3/37] Compiling C object samples/null.p/null.c.o 00:00:58.355 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:00:58.355 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:00:58.355 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:00:58.355 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:00:58.355 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:00:58.355 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:00:58.355 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:00:58.615 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:00:58.615 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:00:58.615 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:00:58.615 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:00:58.615 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:00:58.615 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:00:58.615 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:00:58.615 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:00:58.615 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:00:58.615 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:00:58.615 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:00:58.615 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:00:58.615 [23/37] Compiling C object samples/client.p/client.c.o 00:00:58.615 [24/37] Compiling C object samples/server.p/server.c.o 00:00:58.615 [25/37] Linking target samples/client 00:00:58.615 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:00:58.615 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:00:58.615 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:00:58.877 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:00:58.877 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:00:58.877 [31/37] Linking target test/unit_tests 00:00:58.877 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:00:59.138 [33/37] Linking target samples/server 00:00:59.138 [34/37] Linking target samples/null 00:00:59.138 [35/37] Linking target samples/lspci 00:00:59.138 [36/37] Linking target samples/shadow_ioeventfd_server 00:00:59.138 [37/37] Linking target samples/gpio-pci-idio-16 00:00:59.138 INFO: autodetecting backend as ninja 00:00:59.138 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:00:59.138 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:00.102 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:00.102 ninja: no work to do. 00:01:04.285 The Meson build system 00:01:04.285 Version: 1.3.1 00:01:04.285 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:04.285 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:04.285 Build type: native build 00:01:04.285 Program cat found: YES (/usr/bin/cat) 00:01:04.285 Project name: DPDK 00:01:04.285 Project version: 24.03.0 00:01:04.285 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:04.285 C linker for the host machine: cc ld.bfd 2.39-16 00:01:04.285 Host machine cpu family: x86_64 00:01:04.285 Host machine cpu: x86_64 00:01:04.285 Message: ## Building in Developer Mode ## 00:01:04.285 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:04.285 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:04.285 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:04.285 Program python3 found: YES (/usr/bin/python3) 00:01:04.285 Program cat found: YES (/usr/bin/cat) 00:01:04.285 Compiler for C supports arguments -march=native: YES 00:01:04.285 Checking for size of "void *" : 8 00:01:04.285 Checking for size of "void *" : 8 (cached) 00:01:04.285 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:04.285 Library m found: YES 00:01:04.285 Library numa found: YES 00:01:04.285 Has header "numaif.h" : YES 00:01:04.285 Library fdt found: NO 00:01:04.285 Library execinfo found: NO 00:01:04.285 Has header "execinfo.h" : YES 00:01:04.285 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:04.285 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:04.285 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:04.285 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:04.285 Run-time dependency openssl found: YES 3.0.9 00:01:04.285 Run-time dependency libpcap found: YES 1.10.4 00:01:04.285 Has header "pcap.h" with dependency libpcap: YES 00:01:04.285 Compiler for C supports arguments -Wcast-qual: YES 00:01:04.285 Compiler for C supports arguments -Wdeprecated: YES 00:01:04.285 Compiler for C supports arguments -Wformat: YES 00:01:04.285 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:04.285 Compiler for C supports arguments -Wformat-security: NO 00:01:04.285 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:04.285 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:04.285 Compiler for C supports arguments -Wnested-externs: YES 00:01:04.285 Compiler for C supports arguments -Wold-style-definition: YES 00:01:04.285 Compiler for C supports arguments -Wpointer-arith: YES 00:01:04.285 Compiler for C supports arguments -Wsign-compare: YES 00:01:04.285 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:04.285 Compiler for C supports arguments -Wundef: YES 00:01:04.285 Compiler for C supports arguments -Wwrite-strings: YES 00:01:04.285 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:04.285 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:04.285 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:04.285 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:04.285 Program objdump found: YES (/usr/bin/objdump) 00:01:04.285 Compiler for C supports arguments -mavx512f: YES 00:01:04.285 Checking if "AVX512 checking" compiles: YES 00:01:04.285 Fetching value of define "__SSE4_2__" : 1 00:01:04.285 Fetching value of define "__AES__" : 1 00:01:04.285 Fetching value of define "__AVX__" : 1 00:01:04.285 Fetching value of define "__AVX2__" : (undefined) 00:01:04.285 Fetching value of define "__AVX512BW__" : (undefined) 00:01:04.285 Fetching value of define "__AVX512CD__" : (undefined) 00:01:04.285 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:04.285 Fetching value of define "__AVX512F__" : (undefined) 00:01:04.285 Fetching value of define "__AVX512VL__" : (undefined) 00:01:04.285 Fetching value of define "__PCLMUL__" : 1 00:01:04.285 Fetching value of define "__RDRND__" : 1 00:01:04.285 Fetching value of define "__RDSEED__" : (undefined) 00:01:04.285 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:04.285 Fetching value of define "__znver1__" : (undefined) 00:01:04.285 Fetching value of define "__znver2__" : (undefined) 00:01:04.285 Fetching value of define "__znver3__" : (undefined) 00:01:04.285 Fetching value of define "__znver4__" : (undefined) 00:01:04.285 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:04.285 Message: lib/log: Defining dependency "log" 00:01:04.285 Message: lib/kvargs: Defining dependency "kvargs" 00:01:04.285 Message: lib/telemetry: Defining dependency "telemetry" 00:01:04.285 Checking for function "getentropy" : NO 00:01:04.285 Message: lib/eal: Defining dependency "eal" 00:01:04.285 Message: lib/ring: Defining dependency "ring" 00:01:04.285 Message: lib/rcu: Defining dependency "rcu" 00:01:04.285 Message: lib/mempool: Defining dependency "mempool" 00:01:04.285 Message: lib/mbuf: Defining dependency "mbuf" 00:01:04.285 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:04.286 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:04.286 Compiler for C supports arguments -mpclmul: YES 00:01:04.286 Compiler for C supports arguments -maes: YES 00:01:04.286 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:04.286 Compiler for C supports arguments -mavx512bw: YES 00:01:04.286 Compiler for C supports arguments -mavx512dq: YES 00:01:04.286 Compiler for C supports arguments -mavx512vl: YES 00:01:04.286 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:04.286 Compiler for C supports arguments -mavx2: YES 00:01:04.286 Compiler for C supports arguments -mavx: YES 00:01:04.286 Message: lib/net: Defining dependency "net" 00:01:04.286 Message: lib/meter: Defining dependency "meter" 00:01:04.286 Message: lib/ethdev: Defining dependency "ethdev" 00:01:04.286 Message: lib/pci: Defining dependency "pci" 00:01:04.286 Message: lib/cmdline: Defining dependency "cmdline" 00:01:04.286 Message: lib/hash: Defining dependency "hash" 00:01:04.286 Message: lib/timer: Defining dependency "timer" 00:01:04.286 Message: lib/compressdev: Defining dependency "compressdev" 00:01:04.286 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:04.286 Message: lib/dmadev: Defining dependency "dmadev" 00:01:04.286 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:04.286 Message: lib/power: Defining dependency "power" 00:01:04.286 Message: lib/reorder: Defining dependency "reorder" 00:01:04.286 
Message: lib/security: Defining dependency "security" 00:01:04.286 Has header "linux/userfaultfd.h" : YES 00:01:04.286 Has header "linux/vduse.h" : YES 00:01:04.286 Message: lib/vhost: Defining dependency "vhost" 00:01:04.286 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:04.286 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:04.286 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:04.286 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:04.286 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:04.286 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:04.286 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:04.286 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:04.286 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:04.286 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:04.286 Program doxygen found: YES (/usr/bin/doxygen) 00:01:04.286 Configuring doxy-api-html.conf using configuration 00:01:04.286 Configuring doxy-api-man.conf using configuration 00:01:04.286 Program mandb found: YES (/usr/bin/mandb) 00:01:04.286 Program sphinx-build found: NO 00:01:04.286 Configuring rte_build_config.h using configuration 00:01:04.286 Message: 00:01:04.286 ================= 00:01:04.286 Applications Enabled 00:01:04.286 ================= 00:01:04.286 00:01:04.286 apps: 00:01:04.286 00:01:04.308 00:01:04.308 Message: 00:01:04.308 ================= 00:01:04.308 Libraries Enabled 00:01:04.308 ================= 00:01:04.308 00:01:04.308 libs: 00:01:04.308 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:04.308 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:04.308 cryptodev, dmadev, power, reorder, security, vhost, 00:01:04.308 00:01:04.308 Message: 00:01:04.308 =============== 00:01:04.308 Drivers Enabled 00:01:04.308 =============== 00:01:04.308 00:01:04.308 common: 00:01:04.308 00:01:04.308 bus: 00:01:04.308 pci, vdev, 00:01:04.308 mempool: 00:01:04.308 ring, 00:01:04.308 dma: 00:01:04.308 00:01:04.308 net: 00:01:04.308 00:01:04.308 crypto: 00:01:04.308 00:01:04.308 compress: 00:01:04.308 00:01:04.308 vdpa: 00:01:04.308 00:01:04.308 00:01:04.308 Message: 00:01:04.308 ================= 00:01:04.308 Content Skipped 00:01:04.308 ================= 00:01:04.308 00:01:04.308 apps: 00:01:04.308 dumpcap: explicitly disabled via build config 00:01:04.308 graph: explicitly disabled via build config 00:01:04.308 pdump: explicitly disabled via build config 00:01:04.308 proc-info: explicitly disabled via build config 00:01:04.308 test-acl: explicitly disabled via build config 00:01:04.308 test-bbdev: explicitly disabled via build config 00:01:04.308 test-cmdline: explicitly disabled via build config 00:01:04.308 test-compress-perf: explicitly disabled via build config 00:01:04.308 test-crypto-perf: explicitly disabled via build config 00:01:04.308 test-dma-perf: explicitly disabled via build config 00:01:04.308 test-eventdev: explicitly disabled via build config 00:01:04.308 test-fib: explicitly disabled via build config 00:01:04.308 test-flow-perf: explicitly disabled via build config 00:01:04.308 test-gpudev: explicitly disabled via build config 00:01:04.308 test-mldev: explicitly disabled via build config 00:01:04.308 test-pipeline: explicitly disabled via build config 00:01:04.308 test-pmd: explicitly disabled via build config 
00:01:04.308 test-regex: explicitly disabled via build config 00:01:04.308 test-sad: explicitly disabled via build config 00:01:04.308 test-security-perf: explicitly disabled via build config 00:01:04.308 00:01:04.308 libs: 00:01:04.308 argparse: explicitly disabled via build config 00:01:04.308 metrics: explicitly disabled via build config 00:01:04.308 acl: explicitly disabled via build config 00:01:04.308 bbdev: explicitly disabled via build config 00:01:04.308 bitratestats: explicitly disabled via build config 00:01:04.308 bpf: explicitly disabled via build config 00:01:04.308 cfgfile: explicitly disabled via build config 00:01:04.308 distributor: explicitly disabled via build config 00:01:04.308 efd: explicitly disabled via build config 00:01:04.308 eventdev: explicitly disabled via build config 00:01:04.308 dispatcher: explicitly disabled via build config 00:01:04.308 gpudev: explicitly disabled via build config 00:01:04.308 gro: explicitly disabled via build config 00:01:04.308 gso: explicitly disabled via build config 00:01:04.308 ip_frag: explicitly disabled via build config 00:01:04.308 jobstats: explicitly disabled via build config 00:01:04.308 latencystats: explicitly disabled via build config 00:01:04.308 lpm: explicitly disabled via build config 00:01:04.308 member: explicitly disabled via build config 00:01:04.308 pcapng: explicitly disabled via build config 00:01:04.308 rawdev: explicitly disabled via build config 00:01:04.308 regexdev: explicitly disabled via build config 00:01:04.308 mldev: explicitly disabled via build config 00:01:04.308 rib: explicitly disabled via build config 00:01:04.308 sched: explicitly disabled via build config 00:01:04.308 stack: explicitly disabled via build config 00:01:04.308 ipsec: explicitly disabled via build config 00:01:04.308 pdcp: explicitly disabled via build config 00:01:04.308 fib: explicitly disabled via build config 00:01:04.308 port: explicitly disabled via build config 00:01:04.308 pdump: explicitly disabled via build config 00:01:04.308 table: explicitly disabled via build config 00:01:04.308 pipeline: explicitly disabled via build config 00:01:04.308 graph: explicitly disabled via build config 00:01:04.308 node: explicitly disabled via build config 00:01:04.308 00:01:04.308 drivers: 00:01:04.308 common/cpt: not in enabled drivers build config 00:01:04.308 common/dpaax: not in enabled drivers build config 00:01:04.308 common/iavf: not in enabled drivers build config 00:01:04.308 common/idpf: not in enabled drivers build config 00:01:04.308 common/ionic: not in enabled drivers build config 00:01:04.308 common/mvep: not in enabled drivers build config 00:01:04.308 common/octeontx: not in enabled drivers build config 00:01:04.308 bus/auxiliary: not in enabled drivers build config 00:01:04.308 bus/cdx: not in enabled drivers build config 00:01:04.308 bus/dpaa: not in enabled drivers build config 00:01:04.308 bus/fslmc: not in enabled drivers build config 00:01:04.308 bus/ifpga: not in enabled drivers build config 00:01:04.308 bus/platform: not in enabled drivers build config 00:01:04.308 bus/uacce: not in enabled drivers build config 00:01:04.308 bus/vmbus: not in enabled drivers build config 00:01:04.308 common/cnxk: not in enabled drivers build config 00:01:04.308 common/mlx5: not in enabled drivers build config 00:01:04.308 common/nfp: not in enabled drivers build config 00:01:04.308 common/nitrox: not in enabled drivers build config 00:01:04.308 common/qat: not in enabled drivers build config 00:01:04.308 common/sfc_efx: not in 
enabled drivers build config 00:01:04.308 mempool/bucket: not in enabled drivers build config 00:01:04.308 mempool/cnxk: not in enabled drivers build config 00:01:04.308 mempool/dpaa: not in enabled drivers build config 00:01:04.308 mempool/dpaa2: not in enabled drivers build config 00:01:04.308 mempool/octeontx: not in enabled drivers build config 00:01:04.308 mempool/stack: not in enabled drivers build config 00:01:04.308 dma/cnxk: not in enabled drivers build config 00:01:04.308 dma/dpaa: not in enabled drivers build config 00:01:04.308 dma/dpaa2: not in enabled drivers build config 00:01:04.308 dma/hisilicon: not in enabled drivers build config 00:01:04.308 dma/idxd: not in enabled drivers build config 00:01:04.308 dma/ioat: not in enabled drivers build config 00:01:04.308 dma/skeleton: not in enabled drivers build config 00:01:04.308 net/af_packet: not in enabled drivers build config 00:01:04.308 net/af_xdp: not in enabled drivers build config 00:01:04.308 net/ark: not in enabled drivers build config 00:01:04.308 net/atlantic: not in enabled drivers build config 00:01:04.308 net/avp: not in enabled drivers build config 00:01:04.308 net/axgbe: not in enabled drivers build config 00:01:04.308 net/bnx2x: not in enabled drivers build config 00:01:04.308 net/bnxt: not in enabled drivers build config 00:01:04.308 net/bonding: not in enabled drivers build config 00:01:04.308 net/cnxk: not in enabled drivers build config 00:01:04.308 net/cpfl: not in enabled drivers build config 00:01:04.308 net/cxgbe: not in enabled drivers build config 00:01:04.308 net/dpaa: not in enabled drivers build config 00:01:04.308 net/dpaa2: not in enabled drivers build config 00:01:04.308 net/e1000: not in enabled drivers build config 00:01:04.308 net/ena: not in enabled drivers build config 00:01:04.308 net/enetc: not in enabled drivers build config 00:01:04.308 net/enetfec: not in enabled drivers build config 00:01:04.308 net/enic: not in enabled drivers build config 00:01:04.308 net/failsafe: not in enabled drivers build config 00:01:04.308 net/fm10k: not in enabled drivers build config 00:01:04.308 net/gve: not in enabled drivers build config 00:01:04.308 net/hinic: not in enabled drivers build config 00:01:04.308 net/hns3: not in enabled drivers build config 00:01:04.308 net/i40e: not in enabled drivers build config 00:01:04.308 net/iavf: not in enabled drivers build config 00:01:04.308 net/ice: not in enabled drivers build config 00:01:04.308 net/idpf: not in enabled drivers build config 00:01:04.308 net/igc: not in enabled drivers build config 00:01:04.308 net/ionic: not in enabled drivers build config 00:01:04.308 net/ipn3ke: not in enabled drivers build config 00:01:04.308 net/ixgbe: not in enabled drivers build config 00:01:04.308 net/mana: not in enabled drivers build config 00:01:04.308 net/memif: not in enabled drivers build config 00:01:04.308 net/mlx4: not in enabled drivers build config 00:01:04.308 net/mlx5: not in enabled drivers build config 00:01:04.308 net/mvneta: not in enabled drivers build config 00:01:04.308 net/mvpp2: not in enabled drivers build config 00:01:04.308 net/netvsc: not in enabled drivers build config 00:01:04.308 net/nfb: not in enabled drivers build config 00:01:04.308 net/nfp: not in enabled drivers build config 00:01:04.308 net/ngbe: not in enabled drivers build config 00:01:04.308 net/null: not in enabled drivers build config 00:01:04.308 net/octeontx: not in enabled drivers build config 00:01:04.308 net/octeon_ep: not in enabled drivers build config 00:01:04.308 
net/pcap: not in enabled drivers build config 00:01:04.308 net/pfe: not in enabled drivers build config 00:01:04.308 net/qede: not in enabled drivers build config 00:01:04.308 net/ring: not in enabled drivers build config 00:01:04.308 net/sfc: not in enabled drivers build config 00:01:04.308 net/softnic: not in enabled drivers build config 00:01:04.308 net/tap: not in enabled drivers build config 00:01:04.308 net/thunderx: not in enabled drivers build config 00:01:04.308 net/txgbe: not in enabled drivers build config 00:01:04.308 net/vdev_netvsc: not in enabled drivers build config 00:01:04.308 net/vhost: not in enabled drivers build config 00:01:04.308 net/virtio: not in enabled drivers build config 00:01:04.308 net/vmxnet3: not in enabled drivers build config 00:01:04.308 raw/*: missing internal dependency, "rawdev" 00:01:04.308 crypto/armv8: not in enabled drivers build config 00:01:04.308 crypto/bcmfs: not in enabled drivers build config 00:01:04.308 crypto/caam_jr: not in enabled drivers build config 00:01:04.308 crypto/ccp: not in enabled drivers build config 00:01:04.308 crypto/cnxk: not in enabled drivers build config 00:01:04.308 crypto/dpaa_sec: not in enabled drivers build config 00:01:04.308 crypto/dpaa2_sec: not in enabled drivers build config 00:01:04.308 crypto/ipsec_mb: not in enabled drivers build config 00:01:04.308 crypto/mlx5: not in enabled drivers build config 00:01:04.308 crypto/mvsam: not in enabled drivers build config 00:01:04.308 crypto/nitrox: not in enabled drivers build config 00:01:04.308 crypto/null: not in enabled drivers build config 00:01:04.309 crypto/octeontx: not in enabled drivers build config 00:01:04.309 crypto/openssl: not in enabled drivers build config 00:01:04.309 crypto/scheduler: not in enabled drivers build config 00:01:04.309 crypto/uadk: not in enabled drivers build config 00:01:04.309 crypto/virtio: not in enabled drivers build config 00:01:04.309 compress/isal: not in enabled drivers build config 00:01:04.309 compress/mlx5: not in enabled drivers build config 00:01:04.309 compress/nitrox: not in enabled drivers build config 00:01:04.309 compress/octeontx: not in enabled drivers build config 00:01:04.309 compress/zlib: not in enabled drivers build config 00:01:04.309 regex/*: missing internal dependency, "regexdev" 00:01:04.309 ml/*: missing internal dependency, "mldev" 00:01:04.309 vdpa/ifc: not in enabled drivers build config 00:01:04.309 vdpa/mlx5: not in enabled drivers build config 00:01:04.309 vdpa/nfp: not in enabled drivers build config 00:01:04.309 vdpa/sfc: not in enabled drivers build config 00:01:04.309 event/*: missing internal dependency, "eventdev" 00:01:04.309 baseband/*: missing internal dependency, "bbdev" 00:01:04.309 gpu/*: missing internal dependency, "gpudev" 00:01:04.309 00:01:04.309 00:01:04.567 Build targets in project: 85 00:01:04.567 00:01:04.567 DPDK 24.03.0 00:01:04.567 00:01:04.567 User defined options 00:01:04.567 buildtype : debug 00:01:04.567 default_library : shared 00:01:04.567 libdir : lib 00:01:04.567 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:04.567 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:04.567 c_link_args : 00:01:04.567 cpu_instruction_set: native 00:01:04.567 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:04.567 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:04.567 enable_docs : false 00:01:04.567 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:04.567 enable_kmods : false 00:01:04.567 max_lcores : 128 00:01:04.567 tests : false 00:01:04.567 00:01:04.567 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:05.137 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:05.137 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:05.137 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:05.137 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:05.137 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:05.137 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:05.137 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:05.137 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:05.137 [8/268] Linking static target lib/librte_kvargs.a 00:01:05.137 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:05.137 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:05.137 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:05.137 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:05.137 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:05.137 [14/268] Linking static target lib/librte_log.a 00:01:05.397 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:05.397 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:06.021 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.021 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:06.021 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:06.021 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:06.021 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:06.021 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:06.021 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:06.021 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:06.021 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:06.021 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:06.021 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:06.021 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:06.021 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:06.021 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 
00:01:06.021 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:06.021 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:06.021 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:06.021 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:06.021 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:06.021 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:06.021 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:06.021 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:06.021 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:06.021 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:06.021 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:06.021 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:06.021 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:06.021 [44/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:06.021 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:06.021 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:06.021 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:06.021 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:06.021 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:06.021 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:06.021 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:06.021 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:06.294 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:06.294 [54/268] Linking static target lib/librte_telemetry.a 00:01:06.294 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:06.294 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:06.294 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:06.294 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:06.294 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:06.294 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:06.294 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:06.294 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:06.294 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:06.555 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:06.555 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.555 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:06.555 [67/268] Linking target lib/librte_log.so.24.1 00:01:06.555 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:06.555 [69/268] Linking static target lib/librte_pci.a 00:01:06.813 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:06.813 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:06.813 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:06.813 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:06.813 [74/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:06.813 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:06.813 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:06.813 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:06.813 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:06.813 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:07.077 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:07.077 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:07.077 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:07.077 [83/268] Linking target lib/librte_kvargs.so.24.1 00:01:07.077 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:07.077 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:07.077 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:07.077 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:07.077 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:07.077 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:07.077 [90/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:07.077 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:07.077 [92/268] Linking static target lib/librte_meter.a 00:01:07.077 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:07.077 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:07.077 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:07.077 [96/268] Linking static target lib/librte_ring.a 00:01:07.077 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:07.077 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:07.077 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:07.077 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:07.077 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:07.077 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:07.077 [103/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.077 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:07.077 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:07.077 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:07.077 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:07.077 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:07.340 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:07.341 [110/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.341 [111/268] Linking static target lib/librte_eal.a 00:01:07.341 [112/268] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:07.341 [113/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:07.341 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:07.341 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:07.341 [116/268] Linking static target lib/librte_mempool.a 00:01:07.341 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:07.341 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:07.341 [119/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:07.341 [120/268] Linking target lib/librte_telemetry.so.24.1 00:01:07.341 [121/268] Linking static target lib/librte_rcu.a 00:01:07.341 [122/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:07.341 [123/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:07.341 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:07.341 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:07.599 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:07.599 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:07.599 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:07.599 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:07.599 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:07.599 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:07.599 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:07.599 [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:07.599 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:07.599 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.599 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.599 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:07.599 [138/268] Linking static target lib/librte_net.a 00:01:07.861 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:07.861 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:07.861 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:07.861 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:07.861 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.861 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.861 [145/268] Linking static target lib/librte_cmdline.a 00:01:07.861 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:08.121 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:08.121 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:08.121 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.121 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:08.121 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:08.121 [152/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:08.121 [153/268] Linking static target lib/librte_timer.a 00:01:08.121 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:08.121 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:08.121 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:08.121 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.121 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:08.121 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:08.379 [160/268] Linking static target lib/librte_dmadev.a 00:01:08.379 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:08.379 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:08.379 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:08.379 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:08.379 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:08.379 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.379 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:08.379 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:08.379 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.379 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:08.379 [171/268] Linking static target lib/librte_power.a 00:01:08.379 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.379 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:08.638 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.638 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.638 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:08.638 [177/268] Linking static target lib/librte_compressdev.a 00:01:08.638 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:08.638 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.638 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.638 [181/268] Linking static target lib/librte_hash.a 00:01:08.638 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.638 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:08.638 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.638 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.638 [186/268] Linking static target lib/librte_reorder.a 00:01:08.638 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.896 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.896 [189/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.896 [190/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.896 [191/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.896 
[192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:08.896 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:08.896 [194/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.896 [195/268] Linking static target lib/librte_mbuf.a 00:01:08.896 [196/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.896 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:08.896 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:08.896 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:08.896 [200/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.155 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:09.155 [202/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.155 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:09.155 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:09.155 [205/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.155 [206/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:09.155 [207/268] Linking static target drivers/librte_bus_vdev.a 00:01:09.155 [208/268] Linking static target lib/librte_security.a 00:01:09.155 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:09.155 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:09.155 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:09.155 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:09.155 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:09.155 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:09.155 [215/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:09.155 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:09.155 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:09.155 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.155 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.414 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.414 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.414 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:09.414 [223/268] Linking static target lib/librte_ethdev.a 00:01:09.414 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:09.414 [225/268] Linking static target lib/librte_cryptodev.a 00:01:09.671 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.604 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.537 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.439 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:13.697 [230/268] Linking target lib/librte_eal.so.24.1 00:01:13.697 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.697 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:13.697 [233/268] Linking target lib/librte_ring.so.24.1 00:01:13.697 [234/268] Linking target lib/librte_meter.so.24.1 00:01:13.697 [235/268] Linking target lib/librte_timer.so.24.1 00:01:13.697 [236/268] Linking target lib/librte_pci.so.24.1 00:01:13.697 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:13.697 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:13.956 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:13.956 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:13.956 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:13.956 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:13.956 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:13.956 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:13.956 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:13.956 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:13.956 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:14.214 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:14.214 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:14.214 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:14.214 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:14.214 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:14.214 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:14.214 [254/268] Linking target lib/librte_net.so.24.1 00:01:14.214 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:14.472 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:14.472 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:14.472 [258/268] Linking target lib/librte_security.so.24.1 00:01:14.472 [259/268] Linking target lib/librte_hash.so.24.1 00:01:14.472 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:14.472 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:14.472 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:14.730 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:14.730 [264/268] Linking target lib/librte_power.so.24.1 00:01:17.263 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:17.263 [266/268] Linking static target lib/librte_vhost.a 00:01:17.829 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.086 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:18.086 INFO: autodetecting backend as ninja 00:01:18.086 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:19.018 CC lib/log/log.o 00:01:19.019 CC lib/log/log_flags.o 00:01:19.019 CC lib/log/log_deprecated.o 00:01:19.019 CC lib/ut_mock/mock.o 00:01:19.019 CC lib/ut/ut.o 00:01:19.019 LIB 
libspdk_log.a 00:01:19.019 LIB libspdk_ut_mock.a 00:01:19.019 LIB libspdk_ut.a 00:01:19.019 SO libspdk_log.so.7.0 00:01:19.019 SO libspdk_ut_mock.so.6.0 00:01:19.019 SO libspdk_ut.so.2.0 00:01:19.277 SYMLINK libspdk_ut_mock.so 00:01:19.277 SYMLINK libspdk_ut.so 00:01:19.277 SYMLINK libspdk_log.so 00:01:19.277 CC lib/dma/dma.o 00:01:19.277 CXX lib/trace_parser/trace.o 00:01:19.277 CC lib/ioat/ioat.o 00:01:19.277 CC lib/util/base64.o 00:01:19.277 CC lib/util/bit_array.o 00:01:19.277 CC lib/util/cpuset.o 00:01:19.277 CC lib/util/crc16.o 00:01:19.277 CC lib/util/crc32.o 00:01:19.277 CC lib/util/crc32c.o 00:01:19.277 CC lib/util/crc32_ieee.o 00:01:19.277 CC lib/util/crc64.o 00:01:19.277 CC lib/util/dif.o 00:01:19.277 CC lib/util/fd.o 00:01:19.277 CC lib/util/fd_group.o 00:01:19.277 CC lib/util/file.o 00:01:19.277 CC lib/util/hexlify.o 00:01:19.277 CC lib/util/iov.o 00:01:19.277 CC lib/util/math.o 00:01:19.277 CC lib/util/net.o 00:01:19.277 CC lib/util/pipe.o 00:01:19.277 CC lib/util/strerror_tls.o 00:01:19.277 CC lib/util/string.o 00:01:19.277 CC lib/util/uuid.o 00:01:19.277 CC lib/util/xor.o 00:01:19.277 CC lib/util/zipf.o 00:01:19.535 CC lib/vfio_user/host/vfio_user_pci.o 00:01:19.535 CC lib/vfio_user/host/vfio_user.o 00:01:19.535 LIB libspdk_dma.a 00:01:19.535 LIB libspdk_ioat.a 00:01:19.793 SO libspdk_dma.so.4.0 00:01:19.793 SO libspdk_ioat.so.7.0 00:01:19.793 SYMLINK libspdk_dma.so 00:01:19.793 SYMLINK libspdk_ioat.so 00:01:19.793 LIB libspdk_vfio_user.a 00:01:19.793 SO libspdk_vfio_user.so.5.0 00:01:19.793 SYMLINK libspdk_vfio_user.so 00:01:19.793 LIB libspdk_util.a 00:01:20.051 SO libspdk_util.so.9.1 00:01:20.051 SYMLINK libspdk_util.so 00:01:20.310 LIB libspdk_trace_parser.a 00:01:20.310 CC lib/rdma_utils/rdma_utils.o 00:01:20.310 CC lib/conf/conf.o 00:01:20.310 CC lib/vmd/vmd.o 00:01:20.310 CC lib/rdma_provider/common.o 00:01:20.310 CC lib/idxd/idxd.o 00:01:20.310 CC lib/env_dpdk/env.o 00:01:20.310 CC lib/json/json_parse.o 00:01:20.310 CC lib/vmd/led.o 00:01:20.310 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:20.310 CC lib/idxd/idxd_user.o 00:01:20.310 CC lib/env_dpdk/memory.o 00:01:20.310 CC lib/json/json_util.o 00:01:20.310 CC lib/env_dpdk/pci.o 00:01:20.310 CC lib/idxd/idxd_kernel.o 00:01:20.310 CC lib/json/json_write.o 00:01:20.310 CC lib/env_dpdk/init.o 00:01:20.310 CC lib/env_dpdk/threads.o 00:01:20.310 CC lib/env_dpdk/pci_ioat.o 00:01:20.310 CC lib/env_dpdk/pci_virtio.o 00:01:20.310 CC lib/env_dpdk/pci_vmd.o 00:01:20.310 CC lib/env_dpdk/pci_idxd.o 00:01:20.310 CC lib/env_dpdk/pci_event.o 00:01:20.310 CC lib/env_dpdk/sigbus_handler.o 00:01:20.310 CC lib/env_dpdk/pci_dpdk.o 00:01:20.310 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:20.310 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:20.310 SO libspdk_trace_parser.so.5.0 00:01:20.310 SYMLINK libspdk_trace_parser.so 00:01:20.568 LIB libspdk_rdma_provider.a 00:01:20.568 SO libspdk_rdma_provider.so.6.0 00:01:20.568 LIB libspdk_conf.a 00:01:20.568 SO libspdk_conf.so.6.0 00:01:20.568 SYMLINK libspdk_rdma_provider.so 00:01:20.568 LIB libspdk_json.a 00:01:20.568 SYMLINK libspdk_conf.so 00:01:20.568 SO libspdk_json.so.6.0 00:01:20.568 LIB libspdk_rdma_utils.a 00:01:20.826 SO libspdk_rdma_utils.so.1.0 00:01:20.826 SYMLINK libspdk_json.so 00:01:20.826 SYMLINK libspdk_rdma_utils.so 00:01:20.826 CC lib/jsonrpc/jsonrpc_server.o 00:01:20.826 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:20.826 CC lib/jsonrpc/jsonrpc_client.o 00:01:20.826 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:20.826 LIB libspdk_idxd.a 00:01:20.826 SO libspdk_idxd.so.12.0 00:01:21.118 
SYMLINK libspdk_idxd.so 00:01:21.118 LIB libspdk_vmd.a 00:01:21.118 SO libspdk_vmd.so.6.0 00:01:21.118 SYMLINK libspdk_vmd.so 00:01:21.118 LIB libspdk_jsonrpc.a 00:01:21.118 SO libspdk_jsonrpc.so.6.0 00:01:21.378 SYMLINK libspdk_jsonrpc.so 00:01:21.378 CC lib/rpc/rpc.o 00:01:21.637 LIB libspdk_rpc.a 00:01:21.637 SO libspdk_rpc.so.6.0 00:01:21.637 SYMLINK libspdk_rpc.so 00:01:21.895 CC lib/trace/trace.o 00:01:21.895 CC lib/trace/trace_flags.o 00:01:21.895 CC lib/trace/trace_rpc.o 00:01:21.895 CC lib/notify/notify.o 00:01:21.895 CC lib/keyring/keyring.o 00:01:21.895 CC lib/keyring/keyring_rpc.o 00:01:21.895 CC lib/notify/notify_rpc.o 00:01:22.153 LIB libspdk_notify.a 00:01:22.153 SO libspdk_notify.so.6.0 00:01:22.153 LIB libspdk_keyring.a 00:01:22.153 SYMLINK libspdk_notify.so 00:01:22.153 LIB libspdk_trace.a 00:01:22.153 SO libspdk_keyring.so.1.0 00:01:22.153 SO libspdk_trace.so.10.0 00:01:22.153 SYMLINK libspdk_keyring.so 00:01:22.153 SYMLINK libspdk_trace.so 00:01:22.410 CC lib/thread/thread.o 00:01:22.410 CC lib/thread/iobuf.o 00:01:22.410 CC lib/sock/sock.o 00:01:22.410 CC lib/sock/sock_rpc.o 00:01:22.410 LIB libspdk_env_dpdk.a 00:01:22.410 SO libspdk_env_dpdk.so.15.0 00:01:22.668 SYMLINK libspdk_env_dpdk.so 00:01:22.926 LIB libspdk_sock.a 00:01:22.926 SO libspdk_sock.so.10.0 00:01:22.926 SYMLINK libspdk_sock.so 00:01:23.205 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:23.205 CC lib/nvme/nvme_ctrlr.o 00:01:23.205 CC lib/nvme/nvme_fabric.o 00:01:23.205 CC lib/nvme/nvme_ns_cmd.o 00:01:23.205 CC lib/nvme/nvme_ns.o 00:01:23.205 CC lib/nvme/nvme_pcie_common.o 00:01:23.205 CC lib/nvme/nvme_pcie.o 00:01:23.205 CC lib/nvme/nvme_qpair.o 00:01:23.205 CC lib/nvme/nvme.o 00:01:23.205 CC lib/nvme/nvme_quirks.o 00:01:23.205 CC lib/nvme/nvme_transport.o 00:01:23.205 CC lib/nvme/nvme_discovery.o 00:01:23.205 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:23.205 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:23.205 CC lib/nvme/nvme_tcp.o 00:01:23.205 CC lib/nvme/nvme_opal.o 00:01:23.205 CC lib/nvme/nvme_io_msg.o 00:01:23.205 CC lib/nvme/nvme_poll_group.o 00:01:23.205 CC lib/nvme/nvme_zns.o 00:01:23.205 CC lib/nvme/nvme_stubs.o 00:01:23.205 CC lib/nvme/nvme_auth.o 00:01:23.205 CC lib/nvme/nvme_cuse.o 00:01:23.205 CC lib/nvme/nvme_rdma.o 00:01:23.205 CC lib/nvme/nvme_vfio_user.o 00:01:24.141 LIB libspdk_thread.a 00:01:24.141 SO libspdk_thread.so.10.1 00:01:24.141 SYMLINK libspdk_thread.so 00:01:24.400 CC lib/accel/accel.o 00:01:24.400 CC lib/vfu_tgt/tgt_endpoint.o 00:01:24.400 CC lib/init/json_config.o 00:01:24.400 CC lib/blob/blobstore.o 00:01:24.400 CC lib/accel/accel_rpc.o 00:01:24.400 CC lib/virtio/virtio.o 00:01:24.400 CC lib/init/subsystem.o 00:01:24.400 CC lib/accel/accel_sw.o 00:01:24.400 CC lib/vfu_tgt/tgt_rpc.o 00:01:24.400 CC lib/virtio/virtio_vhost_user.o 00:01:24.400 CC lib/init/subsystem_rpc.o 00:01:24.400 CC lib/blob/request.o 00:01:24.400 CC lib/virtio/virtio_vfio_user.o 00:01:24.400 CC lib/blob/zeroes.o 00:01:24.400 CC lib/init/rpc.o 00:01:24.400 CC lib/virtio/virtio_pci.o 00:01:24.400 CC lib/blob/blob_bs_dev.o 00:01:24.659 LIB libspdk_init.a 00:01:24.659 SO libspdk_init.so.5.0 00:01:24.659 LIB libspdk_virtio.a 00:01:24.659 LIB libspdk_vfu_tgt.a 00:01:24.659 SYMLINK libspdk_init.so 00:01:24.659 SO libspdk_vfu_tgt.so.3.0 00:01:24.659 SO libspdk_virtio.so.7.0 00:01:24.659 SYMLINK libspdk_vfu_tgt.so 00:01:24.659 SYMLINK libspdk_virtio.so 00:01:24.918 CC lib/event/app.o 00:01:24.918 CC lib/event/reactor.o 00:01:24.918 CC lib/event/log_rpc.o 00:01:24.918 CC lib/event/app_rpc.o 00:01:24.918 CC 
lib/event/scheduler_static.o 00:01:25.177 LIB libspdk_event.a 00:01:25.177 SO libspdk_event.so.14.0 00:01:25.457 LIB libspdk_accel.a 00:01:25.457 SYMLINK libspdk_event.so 00:01:25.457 SO libspdk_accel.so.15.1 00:01:25.457 SYMLINK libspdk_accel.so 00:01:25.457 LIB libspdk_nvme.a 00:01:25.457 SO libspdk_nvme.so.13.1 00:01:25.457 CC lib/bdev/bdev.o 00:01:25.715 CC lib/bdev/bdev_rpc.o 00:01:25.715 CC lib/bdev/bdev_zone.o 00:01:25.715 CC lib/bdev/part.o 00:01:25.715 CC lib/bdev/scsi_nvme.o 00:01:25.973 SYMLINK libspdk_nvme.so 00:01:27.348 LIB libspdk_blob.a 00:01:27.348 SO libspdk_blob.so.11.0 00:01:27.348 SYMLINK libspdk_blob.so 00:01:27.605 CC lib/lvol/lvol.o 00:01:27.605 CC lib/blobfs/blobfs.o 00:01:27.605 CC lib/blobfs/tree.o 00:01:28.170 LIB libspdk_bdev.a 00:01:28.170 SO libspdk_bdev.so.15.1 00:01:28.170 SYMLINK libspdk_bdev.so 00:01:28.435 LIB libspdk_blobfs.a 00:01:28.435 SO libspdk_blobfs.so.10.0 00:01:28.435 CC lib/nvmf/ctrlr.o 00:01:28.435 CC lib/scsi/dev.o 00:01:28.435 CC lib/nvmf/ctrlr_discovery.o 00:01:28.435 CC lib/nbd/nbd.o 00:01:28.435 CC lib/nbd/nbd_rpc.o 00:01:28.435 CC lib/scsi/lun.o 00:01:28.435 CC lib/ublk/ublk.o 00:01:28.435 CC lib/nvmf/ctrlr_bdev.o 00:01:28.435 CC lib/scsi/port.o 00:01:28.435 CC lib/nvmf/subsystem.o 00:01:28.435 CC lib/ublk/ublk_rpc.o 00:01:28.435 CC lib/ftl/ftl_core.o 00:01:28.435 CC lib/scsi/scsi.o 00:01:28.435 CC lib/nvmf/nvmf.o 00:01:28.435 CC lib/nvmf/nvmf_rpc.o 00:01:28.435 CC lib/scsi/scsi_bdev.o 00:01:28.435 CC lib/ftl/ftl_init.o 00:01:28.435 CC lib/ftl/ftl_layout.o 00:01:28.435 CC lib/scsi/scsi_pr.o 00:01:28.435 CC lib/nvmf/transport.o 00:01:28.435 CC lib/scsi/scsi_rpc.o 00:01:28.435 CC lib/ftl/ftl_debug.o 00:01:28.435 CC lib/nvmf/tcp.o 00:01:28.435 CC lib/nvmf/mdns_server.o 00:01:28.435 CC lib/nvmf/stubs.o 00:01:28.435 CC lib/ftl/ftl_io.o 00:01:28.435 CC lib/scsi/task.o 00:01:28.435 CC lib/ftl/ftl_sb.o 00:01:28.435 CC lib/nvmf/vfio_user.o 00:01:28.435 CC lib/nvmf/rdma.o 00:01:28.435 CC lib/nvmf/auth.o 00:01:28.435 CC lib/ftl/ftl_l2p.o 00:01:28.435 CC lib/ftl/ftl_l2p_flat.o 00:01:28.435 CC lib/ftl/ftl_nv_cache.o 00:01:28.435 CC lib/ftl/ftl_band.o 00:01:28.435 CC lib/ftl/ftl_band_ops.o 00:01:28.435 CC lib/ftl/ftl_writer.o 00:01:28.435 CC lib/ftl/ftl_rq.o 00:01:28.435 CC lib/ftl/ftl_reloc.o 00:01:28.435 CC lib/ftl/ftl_l2p_cache.o 00:01:28.435 CC lib/ftl/ftl_p2l.o 00:01:28.435 CC lib/ftl/mngt/ftl_mngt.o 00:01:28.435 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:28.435 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:28.435 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:28.435 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:28.435 LIB libspdk_lvol.a 00:01:28.435 SYMLINK libspdk_blobfs.so 00:01:28.435 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:28.435 SO libspdk_lvol.so.10.0 00:01:28.695 SYMLINK libspdk_lvol.so 00:01:28.695 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:28.695 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:28.695 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:28.695 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:28.695 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:28.695 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:28.695 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:28.960 CC lib/ftl/utils/ftl_conf.o 00:01:28.960 CC lib/ftl/utils/ftl_md.o 00:01:28.960 CC lib/ftl/utils/ftl_mempool.o 00:01:28.960 CC lib/ftl/utils/ftl_bitmap.o 00:01:28.960 CC lib/ftl/utils/ftl_property.o 00:01:28.960 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:28.960 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:28.960 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:28.960 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:28.960 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:28.960 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:28.960 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:28.960 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:28.960 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:28.960 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:28.960 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:28.960 CC lib/ftl/base/ftl_base_dev.o 00:01:28.960 CC lib/ftl/base/ftl_base_bdev.o 00:01:29.218 CC lib/ftl/ftl_trace.o 00:01:29.219 LIB libspdk_nbd.a 00:01:29.219 SO libspdk_nbd.so.7.0 00:01:29.219 LIB libspdk_scsi.a 00:01:29.219 SYMLINK libspdk_nbd.so 00:01:29.477 SO libspdk_scsi.so.9.0 00:01:29.477 SYMLINK libspdk_scsi.so 00:01:29.477 LIB libspdk_ublk.a 00:01:29.477 SO libspdk_ublk.so.3.0 00:01:29.477 SYMLINK libspdk_ublk.so 00:01:29.477 CC lib/vhost/vhost.o 00:01:29.737 CC lib/iscsi/conn.o 00:01:29.737 CC lib/vhost/vhost_rpc.o 00:01:29.737 CC lib/vhost/vhost_scsi.o 00:01:29.737 CC lib/iscsi/init_grp.o 00:01:29.737 CC lib/vhost/vhost_blk.o 00:01:29.737 CC lib/iscsi/iscsi.o 00:01:29.737 CC lib/vhost/rte_vhost_user.o 00:01:29.737 CC lib/iscsi/md5.o 00:01:29.737 CC lib/iscsi/param.o 00:01:29.737 CC lib/iscsi/portal_grp.o 00:01:29.737 CC lib/iscsi/tgt_node.o 00:01:29.737 CC lib/iscsi/iscsi_subsystem.o 00:01:29.737 CC lib/iscsi/iscsi_rpc.o 00:01:29.737 CC lib/iscsi/task.o 00:01:29.995 LIB libspdk_ftl.a 00:01:29.995 SO libspdk_ftl.so.9.0 00:01:30.561 SYMLINK libspdk_ftl.so 00:01:30.820 LIB libspdk_vhost.a 00:01:30.820 SO libspdk_vhost.so.8.0 00:01:31.078 SYMLINK libspdk_vhost.so 00:01:31.078 LIB libspdk_nvmf.a 00:01:31.078 LIB libspdk_iscsi.a 00:01:31.078 SO libspdk_iscsi.so.8.0 00:01:31.078 SO libspdk_nvmf.so.19.0 00:01:31.336 SYMLINK libspdk_iscsi.so 00:01:31.336 SYMLINK libspdk_nvmf.so 00:01:31.595 CC module/vfu_device/vfu_virtio.o 00:01:31.595 CC module/vfu_device/vfu_virtio_blk.o 00:01:31.595 CC module/env_dpdk/env_dpdk_rpc.o 00:01:31.595 CC module/vfu_device/vfu_virtio_scsi.o 00:01:31.595 CC module/vfu_device/vfu_virtio_rpc.o 00:01:31.595 CC module/keyring/linux/keyring.o 00:01:31.595 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:31.595 CC module/scheduler/gscheduler/gscheduler.o 00:01:31.595 CC module/keyring/linux/keyring_rpc.o 00:01:31.595 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:31.595 CC module/sock/posix/posix.o 00:01:31.595 CC module/accel/dsa/accel_dsa.o 00:01:31.595 CC module/accel/iaa/accel_iaa.o 00:01:31.595 CC module/accel/dsa/accel_dsa_rpc.o 00:01:31.595 CC module/accel/iaa/accel_iaa_rpc.o 00:01:31.595 CC module/keyring/file/keyring.o 00:01:31.595 CC module/accel/ioat/accel_ioat.o 00:01:31.595 CC module/blob/bdev/blob_bdev.o 00:01:31.595 CC module/keyring/file/keyring_rpc.o 00:01:31.595 CC module/accel/ioat/accel_ioat_rpc.o 00:01:31.595 CC module/accel/error/accel_error.o 00:01:31.595 CC module/accel/error/accel_error_rpc.o 00:01:31.595 LIB libspdk_env_dpdk_rpc.a 00:01:31.860 SO libspdk_env_dpdk_rpc.so.6.0 00:01:31.860 SYMLINK libspdk_env_dpdk_rpc.so 00:01:31.860 LIB libspdk_keyring_linux.a 00:01:31.860 LIB libspdk_scheduler_gscheduler.a 00:01:31.860 LIB libspdk_keyring_file.a 00:01:31.860 LIB libspdk_scheduler_dpdk_governor.a 00:01:31.860 SO libspdk_keyring_linux.so.1.0 00:01:31.860 SO libspdk_scheduler_gscheduler.so.4.0 00:01:31.860 SO libspdk_keyring_file.so.1.0 00:01:31.860 LIB libspdk_accel_error.a 00:01:31.860 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:31.860 LIB libspdk_accel_ioat.a 00:01:31.860 LIB libspdk_scheduler_dynamic.a 00:01:31.860 LIB libspdk_accel_iaa.a 00:01:31.860 SO libspdk_accel_error.so.2.0 00:01:31.860 SYMLINK 
libspdk_scheduler_gscheduler.so 00:01:31.860 SO libspdk_scheduler_dynamic.so.4.0 00:01:31.860 SO libspdk_accel_ioat.so.6.0 00:01:31.860 SO libspdk_accel_iaa.so.3.0 00:01:31.860 SYMLINK libspdk_keyring_linux.so 00:01:31.860 SYMLINK libspdk_keyring_file.so 00:01:31.860 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:31.860 LIB libspdk_accel_dsa.a 00:01:31.860 SYMLINK libspdk_accel_error.so 00:01:31.860 LIB libspdk_blob_bdev.a 00:01:31.860 SYMLINK libspdk_scheduler_dynamic.so 00:01:31.860 SYMLINK libspdk_accel_ioat.so 00:01:31.860 SYMLINK libspdk_accel_iaa.so 00:01:31.860 SO libspdk_accel_dsa.so.5.0 00:01:31.860 SO libspdk_blob_bdev.so.11.0 00:01:32.118 SYMLINK libspdk_accel_dsa.so 00:01:32.118 SYMLINK libspdk_blob_bdev.so 00:01:32.118 LIB libspdk_vfu_device.a 00:01:32.376 SO libspdk_vfu_device.so.3.0 00:01:32.376 CC module/bdev/gpt/gpt.o 00:01:32.376 CC module/bdev/lvol/vbdev_lvol.o 00:01:32.376 CC module/bdev/gpt/vbdev_gpt.o 00:01:32.376 CC module/bdev/nvme/bdev_nvme.o 00:01:32.376 CC module/bdev/malloc/bdev_malloc.o 00:01:32.376 CC module/bdev/delay/vbdev_delay.o 00:01:32.376 CC module/bdev/error/vbdev_error.o 00:01:32.376 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:32.376 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:32.376 CC module/blobfs/bdev/blobfs_bdev.o 00:01:32.376 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:32.376 CC module/bdev/nvme/nvme_rpc.o 00:01:32.376 CC module/bdev/error/vbdev_error_rpc.o 00:01:32.376 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:32.376 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:32.376 CC module/bdev/passthru/vbdev_passthru.o 00:01:32.376 CC module/bdev/null/bdev_null.o 00:01:32.376 CC module/bdev/aio/bdev_aio_rpc.o 00:01:32.376 CC module/bdev/aio/bdev_aio.o 00:01:32.377 CC module/bdev/nvme/bdev_mdns_client.o 00:01:32.377 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:32.377 CC module/bdev/null/bdev_null_rpc.o 00:01:32.377 CC module/bdev/nvme/vbdev_opal.o 00:01:32.377 CC module/bdev/ftl/bdev_ftl.o 00:01:32.377 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:32.377 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:32.377 CC module/bdev/iscsi/bdev_iscsi.o 00:01:32.377 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:32.377 CC module/bdev/raid/bdev_raid.o 00:01:32.377 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:32.377 CC module/bdev/split/vbdev_split.o 00:01:32.377 CC module/bdev/raid/bdev_raid_rpc.o 00:01:32.377 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:32.377 CC module/bdev/split/vbdev_split_rpc.o 00:01:32.377 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:32.377 CC module/bdev/raid/bdev_raid_sb.o 00:01:32.377 CC module/bdev/raid/raid0.o 00:01:32.377 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:32.377 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:32.377 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:32.377 CC module/bdev/raid/raid1.o 00:01:32.377 CC module/bdev/raid/concat.o 00:01:32.377 SYMLINK libspdk_vfu_device.so 00:01:32.635 LIB libspdk_sock_posix.a 00:01:32.635 SO libspdk_sock_posix.so.6.0 00:01:32.635 LIB libspdk_blobfs_bdev.a 00:01:32.635 SYMLINK libspdk_sock_posix.so 00:01:32.635 SO libspdk_blobfs_bdev.so.6.0 00:01:32.635 LIB libspdk_bdev_split.a 00:01:32.635 LIB libspdk_bdev_error.a 00:01:32.635 SYMLINK libspdk_blobfs_bdev.so 00:01:32.635 SO libspdk_bdev_error.so.6.0 00:01:32.635 SO libspdk_bdev_split.so.6.0 00:01:32.893 LIB libspdk_bdev_ftl.a 00:01:32.893 LIB libspdk_bdev_null.a 00:01:32.893 SO libspdk_bdev_ftl.so.6.0 00:01:32.893 LIB libspdk_bdev_gpt.a 00:01:32.893 LIB libspdk_bdev_passthru.a 00:01:32.893 SYMLINK libspdk_bdev_error.so 
00:01:32.893 SYMLINK libspdk_bdev_split.so 00:01:32.893 SO libspdk_bdev_null.so.6.0 00:01:32.893 SO libspdk_bdev_gpt.so.6.0 00:01:32.893 LIB libspdk_bdev_iscsi.a 00:01:32.893 SO libspdk_bdev_passthru.so.6.0 00:01:32.893 SYMLINK libspdk_bdev_ftl.so 00:01:32.893 LIB libspdk_bdev_delay.a 00:01:32.893 SO libspdk_bdev_iscsi.so.6.0 00:01:32.893 LIB libspdk_bdev_aio.a 00:01:32.893 LIB libspdk_bdev_zone_block.a 00:01:32.893 SYMLINK libspdk_bdev_null.so 00:01:32.893 SYMLINK libspdk_bdev_gpt.so 00:01:32.893 SO libspdk_bdev_delay.so.6.0 00:01:32.893 LIB libspdk_bdev_lvol.a 00:01:32.893 LIB libspdk_bdev_malloc.a 00:01:32.893 SO libspdk_bdev_aio.so.6.0 00:01:32.894 SYMLINK libspdk_bdev_passthru.so 00:01:32.894 SO libspdk_bdev_zone_block.so.6.0 00:01:32.894 SYMLINK libspdk_bdev_iscsi.so 00:01:32.894 SO libspdk_bdev_lvol.so.6.0 00:01:32.894 SO libspdk_bdev_malloc.so.6.0 00:01:32.894 SYMLINK libspdk_bdev_delay.so 00:01:32.894 SYMLINK libspdk_bdev_aio.so 00:01:32.894 SYMLINK libspdk_bdev_zone_block.so 00:01:32.894 SYMLINK libspdk_bdev_malloc.so 00:01:32.894 SYMLINK libspdk_bdev_lvol.so 00:01:32.894 LIB libspdk_bdev_virtio.a 00:01:33.151 SO libspdk_bdev_virtio.so.6.0 00:01:33.151 SYMLINK libspdk_bdev_virtio.so 00:01:33.409 LIB libspdk_bdev_raid.a 00:01:33.409 SO libspdk_bdev_raid.so.6.0 00:01:33.409 SYMLINK libspdk_bdev_raid.so 00:01:34.781 LIB libspdk_bdev_nvme.a 00:01:34.781 SO libspdk_bdev_nvme.so.7.0 00:01:34.781 SYMLINK libspdk_bdev_nvme.so 00:01:35.038 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:35.038 CC module/event/subsystems/keyring/keyring.o 00:01:35.038 CC module/event/subsystems/scheduler/scheduler.o 00:01:35.038 CC module/event/subsystems/iobuf/iobuf.o 00:01:35.038 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:35.038 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:35.038 CC module/event/subsystems/sock/sock.o 00:01:35.038 CC module/event/subsystems/vmd/vmd.o 00:01:35.038 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:35.296 LIB libspdk_event_keyring.a 00:01:35.296 LIB libspdk_event_vhost_blk.a 00:01:35.296 LIB libspdk_event_vfu_tgt.a 00:01:35.296 LIB libspdk_event_scheduler.a 00:01:35.296 LIB libspdk_event_vmd.a 00:01:35.296 LIB libspdk_event_sock.a 00:01:35.296 LIB libspdk_event_iobuf.a 00:01:35.296 SO libspdk_event_keyring.so.1.0 00:01:35.296 SO libspdk_event_vfu_tgt.so.3.0 00:01:35.296 SO libspdk_event_vhost_blk.so.3.0 00:01:35.296 SO libspdk_event_scheduler.so.4.0 00:01:35.296 SO libspdk_event_sock.so.5.0 00:01:35.296 SO libspdk_event_vmd.so.6.0 00:01:35.296 SO libspdk_event_iobuf.so.3.0 00:01:35.296 SYMLINK libspdk_event_keyring.so 00:01:35.296 SYMLINK libspdk_event_vhost_blk.so 00:01:35.296 SYMLINK libspdk_event_vfu_tgt.so 00:01:35.296 SYMLINK libspdk_event_scheduler.so 00:01:35.296 SYMLINK libspdk_event_sock.so 00:01:35.296 SYMLINK libspdk_event_vmd.so 00:01:35.296 SYMLINK libspdk_event_iobuf.so 00:01:35.553 CC module/event/subsystems/accel/accel.o 00:01:35.553 LIB libspdk_event_accel.a 00:01:35.553 SO libspdk_event_accel.so.6.0 00:01:35.811 SYMLINK libspdk_event_accel.so 00:01:35.811 CC module/event/subsystems/bdev/bdev.o 00:01:36.078 LIB libspdk_event_bdev.a 00:01:36.078 SO libspdk_event_bdev.so.6.0 00:01:36.078 SYMLINK libspdk_event_bdev.so 00:01:36.369 CC module/event/subsystems/ublk/ublk.o 00:01:36.369 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:36.369 CC module/event/subsystems/nbd/nbd.o 00:01:36.369 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:36.369 CC module/event/subsystems/scsi/scsi.o 00:01:36.369 LIB libspdk_event_nbd.a 00:01:36.369 LIB 
libspdk_event_ublk.a 00:01:36.369 LIB libspdk_event_scsi.a 00:01:36.369 SO libspdk_event_nbd.so.6.0 00:01:36.369 SO libspdk_event_ublk.so.3.0 00:01:36.369 SO libspdk_event_scsi.so.6.0 00:01:36.627 SYMLINK libspdk_event_nbd.so 00:01:36.627 SYMLINK libspdk_event_ublk.so 00:01:36.627 SYMLINK libspdk_event_scsi.so 00:01:36.627 LIB libspdk_event_nvmf.a 00:01:36.627 SO libspdk_event_nvmf.so.6.0 00:01:36.627 SYMLINK libspdk_event_nvmf.so 00:01:36.627 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:36.627 CC module/event/subsystems/iscsi/iscsi.o 00:01:36.885 LIB libspdk_event_vhost_scsi.a 00:01:36.885 SO libspdk_event_vhost_scsi.so.3.0 00:01:36.885 LIB libspdk_event_iscsi.a 00:01:36.885 SO libspdk_event_iscsi.so.6.0 00:01:36.885 SYMLINK libspdk_event_vhost_scsi.so 00:01:36.885 SYMLINK libspdk_event_iscsi.so 00:01:37.144 SO libspdk.so.6.0 00:01:37.144 SYMLINK libspdk.so 00:01:37.144 CXX app/trace/trace.o 00:01:37.144 CC app/trace_record/trace_record.o 00:01:37.144 CC app/spdk_lspci/spdk_lspci.o 00:01:37.144 TEST_HEADER include/spdk/accel.h 00:01:37.144 TEST_HEADER include/spdk/accel_module.h 00:01:37.144 TEST_HEADER include/spdk/assert.h 00:01:37.144 CC test/rpc_client/rpc_client_test.o 00:01:37.144 CC app/spdk_nvme_perf/perf.o 00:01:37.144 TEST_HEADER include/spdk/barrier.h 00:01:37.144 TEST_HEADER include/spdk/base64.h 00:01:37.144 TEST_HEADER include/spdk/bdev.h 00:01:37.144 TEST_HEADER include/spdk/bdev_module.h 00:01:37.144 TEST_HEADER include/spdk/bdev_zone.h 00:01:37.144 CC app/spdk_top/spdk_top.o 00:01:37.144 TEST_HEADER include/spdk/bit_array.h 00:01:37.144 CC app/spdk_nvme_discover/discovery_aer.o 00:01:37.144 TEST_HEADER include/spdk/blob_bdev.h 00:01:37.144 TEST_HEADER include/spdk/bit_pool.h 00:01:37.144 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:37.144 CC app/spdk_nvme_identify/identify.o 00:01:37.144 TEST_HEADER include/spdk/blobfs.h 00:01:37.144 TEST_HEADER include/spdk/blob.h 00:01:37.144 TEST_HEADER include/spdk/conf.h 00:01:37.144 TEST_HEADER include/spdk/config.h 00:01:37.144 TEST_HEADER include/spdk/cpuset.h 00:01:37.144 TEST_HEADER include/spdk/crc16.h 00:01:37.144 TEST_HEADER include/spdk/crc32.h 00:01:37.144 TEST_HEADER include/spdk/crc64.h 00:01:37.144 TEST_HEADER include/spdk/dif.h 00:01:37.144 TEST_HEADER include/spdk/dma.h 00:01:37.144 TEST_HEADER include/spdk/endian.h 00:01:37.144 TEST_HEADER include/spdk/env_dpdk.h 00:01:37.144 TEST_HEADER include/spdk/env.h 00:01:37.144 TEST_HEADER include/spdk/event.h 00:01:37.144 TEST_HEADER include/spdk/fd_group.h 00:01:37.144 TEST_HEADER include/spdk/fd.h 00:01:37.144 TEST_HEADER include/spdk/file.h 00:01:37.144 TEST_HEADER include/spdk/ftl.h 00:01:37.144 TEST_HEADER include/spdk/gpt_spec.h 00:01:37.144 TEST_HEADER include/spdk/hexlify.h 00:01:37.144 TEST_HEADER include/spdk/histogram_data.h 00:01:37.144 TEST_HEADER include/spdk/idxd.h 00:01:37.144 TEST_HEADER include/spdk/idxd_spec.h 00:01:37.144 TEST_HEADER include/spdk/init.h 00:01:37.144 TEST_HEADER include/spdk/ioat.h 00:01:37.144 TEST_HEADER include/spdk/ioat_spec.h 00:01:37.144 TEST_HEADER include/spdk/iscsi_spec.h 00:01:37.144 TEST_HEADER include/spdk/json.h 00:01:37.144 TEST_HEADER include/spdk/keyring.h 00:01:37.144 TEST_HEADER include/spdk/jsonrpc.h 00:01:37.409 TEST_HEADER include/spdk/keyring_module.h 00:01:37.409 TEST_HEADER include/spdk/likely.h 00:01:37.409 TEST_HEADER include/spdk/log.h 00:01:37.409 TEST_HEADER include/spdk/lvol.h 00:01:37.409 TEST_HEADER include/spdk/memory.h 00:01:37.409 TEST_HEADER include/spdk/mmio.h 00:01:37.409 TEST_HEADER 
include/spdk/nbd.h 00:01:37.409 TEST_HEADER include/spdk/net.h 00:01:37.409 TEST_HEADER include/spdk/notify.h 00:01:37.409 TEST_HEADER include/spdk/nvme.h 00:01:37.409 TEST_HEADER include/spdk/nvme_intel.h 00:01:37.409 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:37.409 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:37.409 TEST_HEADER include/spdk/nvme_zns.h 00:01:37.409 TEST_HEADER include/spdk/nvme_spec.h 00:01:37.409 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:37.409 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:37.409 TEST_HEADER include/spdk/nvmf.h 00:01:37.409 TEST_HEADER include/spdk/nvmf_spec.h 00:01:37.409 TEST_HEADER include/spdk/nvmf_transport.h 00:01:37.409 TEST_HEADER include/spdk/opal.h 00:01:37.409 TEST_HEADER include/spdk/opal_spec.h 00:01:37.409 TEST_HEADER include/spdk/pci_ids.h 00:01:37.409 TEST_HEADER include/spdk/pipe.h 00:01:37.409 TEST_HEADER include/spdk/queue.h 00:01:37.409 TEST_HEADER include/spdk/reduce.h 00:01:37.409 TEST_HEADER include/spdk/rpc.h 00:01:37.409 TEST_HEADER include/spdk/scheduler.h 00:01:37.409 TEST_HEADER include/spdk/scsi.h 00:01:37.409 TEST_HEADER include/spdk/scsi_spec.h 00:01:37.409 TEST_HEADER include/spdk/sock.h 00:01:37.409 TEST_HEADER include/spdk/stdinc.h 00:01:37.409 TEST_HEADER include/spdk/string.h 00:01:37.409 TEST_HEADER include/spdk/thread.h 00:01:37.409 TEST_HEADER include/spdk/trace.h 00:01:37.409 TEST_HEADER include/spdk/trace_parser.h 00:01:37.409 TEST_HEADER include/spdk/tree.h 00:01:37.409 TEST_HEADER include/spdk/ublk.h 00:01:37.409 TEST_HEADER include/spdk/util.h 00:01:37.409 TEST_HEADER include/spdk/uuid.h 00:01:37.409 TEST_HEADER include/spdk/version.h 00:01:37.409 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:37.409 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:37.409 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:37.409 TEST_HEADER include/spdk/vhost.h 00:01:37.409 TEST_HEADER include/spdk/vmd.h 00:01:37.409 TEST_HEADER include/spdk/xor.h 00:01:37.409 TEST_HEADER include/spdk/zipf.h 00:01:37.409 CXX test/cpp_headers/accel.o 00:01:37.409 CXX test/cpp_headers/accel_module.o 00:01:37.409 CXX test/cpp_headers/assert.o 00:01:37.409 CXX test/cpp_headers/barrier.o 00:01:37.409 CXX test/cpp_headers/base64.o 00:01:37.409 CC app/nvmf_tgt/nvmf_main.o 00:01:37.409 CXX test/cpp_headers/bdev.o 00:01:37.409 CXX test/cpp_headers/bdev_module.o 00:01:37.409 CXX test/cpp_headers/bdev_zone.o 00:01:37.409 CXX test/cpp_headers/bit_array.o 00:01:37.409 CXX test/cpp_headers/bit_pool.o 00:01:37.409 CXX test/cpp_headers/blob_bdev.o 00:01:37.410 CXX test/cpp_headers/blobfs_bdev.o 00:01:37.410 CXX test/cpp_headers/blobfs.o 00:01:37.410 CXX test/cpp_headers/blob.o 00:01:37.410 CXX test/cpp_headers/conf.o 00:01:37.410 CXX test/cpp_headers/config.o 00:01:37.410 CC app/spdk_dd/spdk_dd.o 00:01:37.410 CXX test/cpp_headers/cpuset.o 00:01:37.410 CXX test/cpp_headers/crc16.o 00:01:37.410 CC app/iscsi_tgt/iscsi_tgt.o 00:01:37.410 CXX test/cpp_headers/crc32.o 00:01:37.410 CC app/spdk_tgt/spdk_tgt.o 00:01:37.410 CC test/app/histogram_perf/histogram_perf.o 00:01:37.410 CC examples/util/zipf/zipf.o 00:01:37.410 CC test/env/vtophys/vtophys.o 00:01:37.410 CC test/app/jsoncat/jsoncat.o 00:01:37.410 CC test/thread/poller_perf/poller_perf.o 00:01:37.410 CC examples/ioat/perf/perf.o 00:01:37.410 CC examples/ioat/verify/verify.o 00:01:37.410 CC test/app/stub/stub.o 00:01:37.410 CC test/env/memory/memory_ut.o 00:01:37.410 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:37.410 CC test/env/pci/pci_ut.o 00:01:37.410 CC app/fio/nvme/fio_plugin.o 00:01:37.410 
CC test/dma/test_dma/test_dma.o 00:01:37.410 CC test/app/bdev_svc/bdev_svc.o 00:01:37.410 CC app/fio/bdev/fio_plugin.o 00:01:37.672 LINK spdk_lspci 00:01:37.672 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:37.672 CC test/env/mem_callbacks/mem_callbacks.o 00:01:37.672 LINK rpc_client_test 00:01:37.672 LINK spdk_nvme_discover 00:01:37.672 LINK nvmf_tgt 00:01:37.672 LINK poller_perf 00:01:37.672 LINK vtophys 00:01:37.672 LINK jsoncat 00:01:37.672 CXX test/cpp_headers/crc64.o 00:01:37.672 LINK interrupt_tgt 00:01:37.672 LINK zipf 00:01:37.672 CXX test/cpp_headers/dif.o 00:01:37.672 CXX test/cpp_headers/dma.o 00:01:37.672 LINK histogram_perf 00:01:37.672 LINK env_dpdk_post_init 00:01:37.672 LINK spdk_trace_record 00:01:37.672 CXX test/cpp_headers/env_dpdk.o 00:01:37.672 CXX test/cpp_headers/endian.o 00:01:37.672 CXX test/cpp_headers/env.o 00:01:37.672 CXX test/cpp_headers/event.o 00:01:37.672 CXX test/cpp_headers/fd_group.o 00:01:37.672 CXX test/cpp_headers/fd.o 00:01:37.672 CXX test/cpp_headers/file.o 00:01:37.672 CXX test/cpp_headers/ftl.o 00:01:37.672 CXX test/cpp_headers/gpt_spec.o 00:01:37.672 LINK stub 00:01:37.939 LINK iscsi_tgt 00:01:37.940 CXX test/cpp_headers/hexlify.o 00:01:37.940 LINK bdev_svc 00:01:37.940 CXX test/cpp_headers/histogram_data.o 00:01:37.940 CXX test/cpp_headers/idxd.o 00:01:37.940 LINK verify 00:01:37.940 LINK ioat_perf 00:01:37.940 LINK spdk_tgt 00:01:37.940 CXX test/cpp_headers/idxd_spec.o 00:01:37.940 CXX test/cpp_headers/init.o 00:01:37.940 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:37.940 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:37.940 CXX test/cpp_headers/ioat.o 00:01:37.940 CXX test/cpp_headers/ioat_spec.o 00:01:37.940 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:37.940 CXX test/cpp_headers/iscsi_spec.o 00:01:37.940 CXX test/cpp_headers/json.o 00:01:37.940 CXX test/cpp_headers/jsonrpc.o 00:01:38.202 LINK spdk_dd 00:01:38.202 LINK spdk_trace 00:01:38.202 CXX test/cpp_headers/keyring.o 00:01:38.202 CXX test/cpp_headers/keyring_module.o 00:01:38.202 CXX test/cpp_headers/likely.o 00:01:38.202 CXX test/cpp_headers/log.o 00:01:38.202 CXX test/cpp_headers/lvol.o 00:01:38.202 CXX test/cpp_headers/memory.o 00:01:38.202 CXX test/cpp_headers/mmio.o 00:01:38.202 CXX test/cpp_headers/nbd.o 00:01:38.202 CXX test/cpp_headers/net.o 00:01:38.202 CXX test/cpp_headers/notify.o 00:01:38.203 CXX test/cpp_headers/nvme.o 00:01:38.203 LINK pci_ut 00:01:38.203 CXX test/cpp_headers/nvme_intel.o 00:01:38.203 CXX test/cpp_headers/nvme_ocssd.o 00:01:38.203 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:38.203 CXX test/cpp_headers/nvme_spec.o 00:01:38.203 CXX test/cpp_headers/nvme_zns.o 00:01:38.203 LINK test_dma 00:01:38.203 CXX test/cpp_headers/nvmf_cmd.o 00:01:38.203 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:38.203 CXX test/cpp_headers/nvmf.o 00:01:38.203 CXX test/cpp_headers/nvmf_spec.o 00:01:38.203 CXX test/cpp_headers/nvmf_transport.o 00:01:38.467 CC test/event/event_perf/event_perf.o 00:01:38.467 CXX test/cpp_headers/opal.o 00:01:38.467 CC test/event/reactor/reactor.o 00:01:38.467 CXX test/cpp_headers/opal_spec.o 00:01:38.467 LINK nvme_fuzz 00:01:38.467 CXX test/cpp_headers/pci_ids.o 00:01:38.467 CC test/event/reactor_perf/reactor_perf.o 00:01:38.467 CXX test/cpp_headers/pipe.o 00:01:38.467 CC test/event/app_repeat/app_repeat.o 00:01:38.467 CC examples/sock/hello_world/hello_sock.o 00:01:38.467 CC examples/vmd/lsvmd/lsvmd.o 00:01:38.467 CC examples/idxd/perf/perf.o 00:01:38.467 CXX test/cpp_headers/queue.o 00:01:38.467 CC examples/vmd/led/led.o 00:01:38.467 LINK 
spdk_nvme 00:01:38.467 LINK spdk_bdev 00:01:38.467 CC examples/thread/thread/thread_ex.o 00:01:38.467 CXX test/cpp_headers/reduce.o 00:01:38.467 CXX test/cpp_headers/rpc.o 00:01:38.467 CC test/event/scheduler/scheduler.o 00:01:38.467 CXX test/cpp_headers/scheduler.o 00:01:38.467 CXX test/cpp_headers/scsi.o 00:01:38.467 CXX test/cpp_headers/scsi_spec.o 00:01:38.467 CXX test/cpp_headers/sock.o 00:01:38.467 CXX test/cpp_headers/stdinc.o 00:01:38.467 CXX test/cpp_headers/string.o 00:01:38.467 CXX test/cpp_headers/thread.o 00:01:38.467 CXX test/cpp_headers/trace.o 00:01:38.467 CXX test/cpp_headers/trace_parser.o 00:01:38.731 CXX test/cpp_headers/tree.o 00:01:38.731 CXX test/cpp_headers/ublk.o 00:01:38.731 CXX test/cpp_headers/util.o 00:01:38.731 CXX test/cpp_headers/uuid.o 00:01:38.731 LINK reactor 00:01:38.731 LINK event_perf 00:01:38.731 CXX test/cpp_headers/version.o 00:01:38.731 CXX test/cpp_headers/vfio_user_pci.o 00:01:38.731 CXX test/cpp_headers/vfio_user_spec.o 00:01:38.731 CXX test/cpp_headers/vhost.o 00:01:38.731 CXX test/cpp_headers/vmd.o 00:01:38.731 LINK reactor_perf 00:01:38.731 CXX test/cpp_headers/xor.o 00:01:38.731 CXX test/cpp_headers/zipf.o 00:01:38.731 CC app/vhost/vhost.o 00:01:38.731 LINK mem_callbacks 00:01:38.731 LINK lsvmd 00:01:38.731 LINK app_repeat 00:01:38.731 LINK led 00:01:38.731 LINK spdk_nvme_perf 00:01:38.731 LINK spdk_nvme_identify 00:01:38.990 LINK vhost_fuzz 00:01:38.990 LINK spdk_top 00:01:38.990 LINK hello_sock 00:01:38.990 LINK scheduler 00:01:38.990 CC test/nvme/overhead/overhead.o 00:01:38.990 CC test/nvme/startup/startup.o 00:01:38.990 CC test/nvme/aer/aer.o 00:01:38.990 CC test/nvme/e2edp/nvme_dp.o 00:01:38.990 CC test/nvme/err_injection/err_injection.o 00:01:38.990 CC test/nvme/reset/reset.o 00:01:38.990 LINK thread 00:01:38.990 CC test/nvme/sgl/sgl.o 00:01:38.990 CC test/nvme/reserve/reserve.o 00:01:38.990 CC test/accel/dif/dif.o 00:01:38.990 CC test/blobfs/mkfs/mkfs.o 00:01:38.990 CC test/nvme/simple_copy/simple_copy.o 00:01:38.990 CC test/nvme/connect_stress/connect_stress.o 00:01:38.990 CC test/nvme/boot_partition/boot_partition.o 00:01:38.990 CC test/nvme/compliance/nvme_compliance.o 00:01:38.990 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:38.990 CC test/nvme/fused_ordering/fused_ordering.o 00:01:38.990 CC test/nvme/fdp/fdp.o 00:01:38.990 CC test/nvme/cuse/cuse.o 00:01:39.250 LINK idxd_perf 00:01:39.250 CC test/lvol/esnap/esnap.o 00:01:39.250 LINK vhost 00:01:39.250 LINK reserve 00:01:39.250 LINK connect_stress 00:01:39.250 LINK mkfs 00:01:39.250 LINK doorbell_aers 00:01:39.250 LINK startup 00:01:39.250 CC examples/nvme/arbitration/arbitration.o 00:01:39.250 CC examples/nvme/hello_world/hello_world.o 00:01:39.250 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:39.250 CC examples/nvme/abort/abort.o 00:01:39.250 LINK sgl 00:01:39.250 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:39.509 CC examples/nvme/reconnect/reconnect.o 00:01:39.509 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:39.509 LINK err_injection 00:01:39.509 CC examples/nvme/hotplug/hotplug.o 00:01:39.509 LINK simple_copy 00:01:39.509 LINK fused_ordering 00:01:39.509 LINK boot_partition 00:01:39.509 LINK overhead 00:01:39.509 LINK nvme_dp 00:01:39.509 LINK aer 00:01:39.509 LINK nvme_compliance 00:01:39.509 LINK reset 00:01:39.509 CC examples/accel/perf/accel_perf.o 00:01:39.509 CC examples/blob/hello_world/hello_blob.o 00:01:39.509 LINK memory_ut 00:01:39.509 LINK fdp 00:01:39.509 CC examples/blob/cli/blobcli.o 00:01:39.509 LINK pmr_persistence 00:01:39.768 LINK dif 
00:01:39.768 LINK hello_world 00:01:39.768 LINK cmb_copy 00:01:39.768 LINK hotplug 00:01:39.768 LINK hello_blob 00:01:39.768 LINK abort 00:01:39.768 LINK reconnect 00:01:39.768 LINK arbitration 00:01:40.027 LINK nvme_manage 00:01:40.027 CC test/bdev/bdevio/bdevio.o 00:01:40.027 LINK accel_perf 00:01:40.027 LINK blobcli 00:01:40.285 LINK iscsi_fuzz 00:01:40.543 CC examples/bdev/hello_world/hello_bdev.o 00:01:40.543 CC examples/bdev/bdevperf/bdevperf.o 00:01:40.543 LINK bdevio 00:01:40.801 LINK hello_bdev 00:01:40.802 LINK cuse 00:01:41.059 LINK bdevperf 00:01:41.627 CC examples/nvmf/nvmf/nvmf.o 00:01:41.885 LINK nvmf 00:01:44.416 LINK esnap 00:01:44.416 00:01:44.416 real 0m48.818s 00:01:44.416 user 10m5.037s 00:01:44.416 sys 2m27.971s 00:01:44.416 00:05:35 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:44.416 00:05:35 make -- common/autotest_common.sh@10 -- $ set +x 00:01:44.416 ************************************ 00:01:44.416 END TEST make 00:01:44.416 ************************************ 00:01:44.416 00:05:35 -- common/autotest_common.sh@1142 -- $ return 0 00:01:44.416 00:05:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:44.416 00:05:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:44.416 00:05:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:44.416 00:05:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.416 00:05:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:44.416 00:05:35 -- pm/common@44 -- $ pid=2072502 00:01:44.416 00:05:35 -- pm/common@50 -- $ kill -TERM 2072502 00:01:44.416 00:05:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.416 00:05:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:44.416 00:05:35 -- pm/common@44 -- $ pid=2072503 00:01:44.416 00:05:35 -- pm/common@50 -- $ kill -TERM 2072503 00:01:44.416 00:05:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.416 00:05:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:44.416 00:05:35 -- pm/common@44 -- $ pid=2072506 00:01:44.416 00:05:35 -- pm/common@50 -- $ kill -TERM 2072506 00:01:44.416 00:05:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.416 00:05:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:44.416 00:05:35 -- pm/common@44 -- $ pid=2072534 00:01:44.416 00:05:35 -- pm/common@50 -- $ sudo -E kill -TERM 2072534 00:01:44.416 00:05:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:44.416 00:05:36 -- nvmf/common.sh@7 -- # uname -s 00:01:44.416 00:05:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:44.416 00:05:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:44.416 00:05:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:44.416 00:05:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:44.416 00:05:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:44.416 00:05:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:44.416 00:05:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:44.416 00:05:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:44.416 00:05:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:44.416 00:05:36 -- nvmf/common.sh@17 -- # 
nvme gen-hostnqn 00:01:44.416 00:05:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:44.416 00:05:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:44.416 00:05:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:44.416 00:05:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:44.416 00:05:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:44.416 00:05:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:44.416 00:05:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:44.416 00:05:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:44.416 00:05:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.416 00:05:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.416 00:05:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.416 00:05:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.416 00:05:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.416 00:05:36 -- paths/export.sh@5 -- # export PATH 00:01:44.416 00:05:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.416 00:05:36 -- nvmf/common.sh@47 -- # : 0 00:01:44.416 00:05:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:44.416 00:05:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:44.416 00:05:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:44.416 00:05:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:44.416 00:05:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:44.416 00:05:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:44.416 00:05:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:44.416 00:05:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:44.674 00:05:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:44.675 00:05:36 -- spdk/autotest.sh@32 -- # uname -s 00:01:44.675 00:05:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:44.675 00:05:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:44.675 00:05:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.675 00:05:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:44.675 00:05:36 -- spdk/autotest.sh@40 -- # 
echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.675 00:05:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:44.675 00:05:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:44.675 00:05:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:44.675 00:05:36 -- spdk/autotest.sh@48 -- # udevadm_pid=2127989 00:01:44.675 00:05:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:44.675 00:05:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:44.675 00:05:36 -- pm/common@17 -- # local monitor 00:01:44.675 00:05:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.675 00:05:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.675 00:05:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.675 00:05:36 -- pm/common@21 -- # date +%s 00:01:44.675 00:05:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.675 00:05:36 -- pm/common@21 -- # date +%s 00:01:44.675 00:05:36 -- pm/common@25 -- # sleep 1 00:01:44.675 00:05:36 -- pm/common@21 -- # date +%s 00:01:44.675 00:05:36 -- pm/common@21 -- # date +%s 00:01:44.675 00:05:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081136 00:01:44.675 00:05:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081136 00:01:44.675 00:05:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081136 00:01:44.675 00:05:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081136 00:01:44.675 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081136_collect-vmstat.pm.log 00:01:44.675 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081136_collect-cpu-load.pm.log 00:01:44.675 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081136_collect-cpu-temp.pm.log 00:01:44.675 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081136_collect-bmc-pm.bmc.pm.log 00:01:45.609 00:05:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:45.609 00:05:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:45.609 00:05:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:45.609 00:05:37 -- common/autotest_common.sh@10 -- # set +x 00:01:45.609 00:05:37 -- spdk/autotest.sh@59 -- # create_test_list 00:01:45.609 00:05:37 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:45.609 00:05:37 -- common/autotest_common.sh@10 -- # set +x 00:01:45.609 00:05:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:45.609 00:05:37 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.609 00:05:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
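[editor's note] The core-collector pattern assembled just above relies on the kernel's pipe-handler mechanism: when /proc/sys/kernel/core_pattern begins with '|', the kernel executes the named program on every crash and streams the core image to its stdin, expanding %P, %s and %t into the crashing PID, the signal number and the dump time. A minimal sketch of such a handler, using hypothetical paths rather than SPDK's actual core-collector.sh:

    #!/bin/bash
    # hypothetical collector: argv comes from the %P %s %t specifiers,
    # the core image itself arrives on stdin
    pid=$1 sig=$2 time=$3
    out=/var/coredumps                  # assumed output directory
    mkdir -p "$out"
    # compress the core as it streams in; never keep it uncompressed
    gzip -c > "$out/core.$pid.$sig.$time.gz"

    # registered (as root) with:
    #   echo '|/usr/local/bin/collector.sh %P %s %t' > /proc/sys/kernel/core_pattern

Piping instead of writing a plain file is what lets the CI job compress and relocate dumps without racing the kernel.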
00:01:45.609 00:05:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:45.609 00:05:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.609 00:05:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:45.609 00:05:37 -- common/autotest_common.sh@1455 -- # uname 00:01:45.609 00:05:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:45.609 00:05:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:45.609 00:05:37 -- common/autotest_common.sh@1475 -- # uname 00:01:45.609 00:05:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:45.609 00:05:37 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:45.609 00:05:37 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:45.609 00:05:37 -- spdk/autotest.sh@72 -- # hash lcov 00:01:45.609 00:05:37 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:45.609 00:05:37 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:45.609 --rc lcov_branch_coverage=1 00:01:45.609 --rc lcov_function_coverage=1 00:01:45.609 --rc genhtml_branch_coverage=1 00:01:45.609 --rc genhtml_function_coverage=1 00:01:45.609 --rc genhtml_legend=1 00:01:45.609 --rc geninfo_all_blocks=1 00:01:45.609 ' 00:01:45.609 00:05:37 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:45.609 --rc lcov_branch_coverage=1 00:01:45.609 --rc lcov_function_coverage=1 00:01:45.609 --rc genhtml_branch_coverage=1 00:01:45.609 --rc genhtml_function_coverage=1 00:01:45.609 --rc genhtml_legend=1 00:01:45.609 --rc geninfo_all_blocks=1 00:01:45.609 ' 00:01:45.609 00:05:37 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:45.609 --rc lcov_branch_coverage=1 00:01:45.609 --rc lcov_function_coverage=1 00:01:45.609 --rc genhtml_branch_coverage=1 00:01:45.609 --rc genhtml_function_coverage=1 00:01:45.609 --rc genhtml_legend=1 00:01:45.609 --rc geninfo_all_blocks=1 00:01:45.609 --no-external' 00:01:45.609 00:05:37 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:45.609 --rc lcov_branch_coverage=1 00:01:45.609 --rc lcov_function_coverage=1 00:01:45.609 --rc genhtml_branch_coverage=1 00:01:45.609 --rc genhtml_function_coverage=1 00:01:45.609 --rc genhtml_legend=1 00:01:45.609 --rc geninfo_all_blocks=1 00:01:45.609 --no-external' 00:01:45.609 00:05:37 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:45.609 lcov: LCOV version 1.14 00:01:45.609 00:05:37 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:47.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:47.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:47.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:47.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:47.511 
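[editor's note] The capture above with -c -i -t Baseline is the standard lcov two-phase flow: an initial capture records zero counts for every instrumented file, a post-test capture records the real counters, and the two tracefiles are merged so sources that never executed still show up at 0% coverage. A minimal sketch under the same --rc options, with hypothetical build and output paths:

    # assumed build tree and tracefile names, mirroring the flags in the log
    opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)
    lcov "${opts[@]}" -c -i -t Baseline -d ./build -o cov_base.info   # zero-count baseline
    # ... run the instrumented tests here ...
    lcov "${opts[@]}" -c    -t Tests    -d ./build -o cov_test.info   # real counters
    lcov "${opts[@]}" -a cov_base.info -a cov_test.info -o cov_total.info  # merge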
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:47.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:47.511 [identical warning pairs elided for the remaining headers under test/cpp_headers: each .gcno reports "no functions found" followed by "GCOV did not produce any data"] 00:01:47.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:47.513 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:47.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:47.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:47.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:47.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:01:47.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:47.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:47.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:47.513 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:02.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:02.378 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:20.499 00:06:10 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:20.499 00:06:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:20.499 00:06:10 -- common/autotest_common.sh@10 -- # set +x 00:02:20.499 00:06:10 -- spdk/autotest.sh@91 -- # rm -f 00:02:20.499 00:06:10 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:20.757 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:20.757 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:20.757 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:20.757 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:20.757 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:20.757 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:20.757 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:20.757 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:20.757 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:20.757 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:20.757 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:20.757 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:20.757 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:20.757 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:20.757 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:21.016 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:21.016 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:21.016 00:06:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:21.016 00:06:12 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:21.016 00:06:12 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:21.016 00:06:12 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:21.016 00:06:12 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:21.016 00:06:12 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:21.016 00:06:12 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:21.016 
00:06:12 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:21.016 00:06:12 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:21.016 00:06:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:21.016 00:06:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:21.016 00:06:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:21.016 00:06:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:21.016 00:06:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:21.016 00:06:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:21.016 No valid GPT data, bailing 00:02:21.016 00:06:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:21.016 00:06:12 -- scripts/common.sh@391 -- # pt= 00:02:21.016 00:06:12 -- scripts/common.sh@392 -- # return 1 00:02:21.016 00:06:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:21.016 1+0 records in 00:02:21.016 1+0 records out 00:02:21.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00227224 s, 461 MB/s 00:02:21.016 00:06:12 -- spdk/autotest.sh@118 -- # sync 00:02:21.016 00:06:12 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:21.016 00:06:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:21.016 00:06:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:22.919 00:06:14 -- spdk/autotest.sh@124 -- # uname -s 00:02:22.919 00:06:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:22.919 00:06:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.919 00:06:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:22.919 00:06:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:22.919 00:06:14 -- common/autotest_common.sh@10 -- # set +x 00:02:22.919 ************************************ 00:02:22.919 START TEST setup.sh 00:02:22.919 ************************************ 00:02:22.919 00:06:14 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.919 * Looking for test storage... 00:02:22.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:22.919 00:06:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:22.919 00:06:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:22.919 00:06:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:22.919 00:06:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:22.919 00:06:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:22.919 00:06:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:22.919 ************************************ 00:02:22.919 START TEST acl 00:02:22.919 ************************************ 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:22.919 * Looking for test storage... 
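[editor's note] The sequence above — get_zoned_devs, the spdk-gpt.py probe that prints "No valid GPT data, bailing", the blkid PTTYPE fallback, and the 1 MiB dd — is a guarded wipe: skip zoned namespaces, refuse any device that still carries a partition table, and only then zero the first megabyte. A condensed sketch of the same guard, folding the probe into blkid alone and hardcoding the device name for illustration:

    dev=nvme0n1                                   # hypothetical target namespace
    # skip zoned block devices; autotest only stamps conventional namespaces
    if [[ -e /sys/block/$dev/queue/zoned && $(cat /sys/block/$dev/queue/zoned) != none ]]; then
        echo "skipping /dev/$dev: zoned device" >&2
        exit 0
    fi
    # refuse to touch a device that still carries a partition table
    if pt=$(blkid -s PTTYPE -o value "/dev/$dev") && [[ -n $pt ]]; then
        echo "skipping /dev/$dev: partition table '$pt' present" >&2
        exit 0
    fi
    # nothing found: zero the first MiB so later tests start from a clean device
    dd if=/dev/zero of="/dev/$dev" bs=1M count=1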
00:02:22.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:22.919 00:06:14 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.919 00:06:14 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:22.919 00:06:14 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:22.919 00:06:14 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:22.919 00:06:14 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:22.919 00:06:14 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:22.919 00:06:14 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:22.919 00:06:14 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:22.919 00:06:14 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:24.295 00:06:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:24.295 00:06:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:24.295 00:06:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.295 00:06:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:24.295 00:06:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:24.295 00:06:15 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:25.233 Hugepages 00:02:25.233 node hugesize free / total 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.233 00:02:25.233 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:25.233 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # continue [identical xtrace elided: acl.sh skips the remaining ioatdma channels through 0000:80:04.3 the same way] 00:02:25.493 00:06:16 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:25.493 00:06:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:25.494 00:06:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:25.494 00:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.494 00:06:16 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:25.494 00:06:16 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:25.494 00:06:16 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:25.494 00:06:16 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:25.494 00:06:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:25.494 ************************************ 00:02:25.494 START TEST denied 00:02:25.494 ************************************ 00:02:25.494 00:06:16 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:25.494 00:06:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:25.494 00:06:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:25.494 00:06:16 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:25.494 00:06:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.494 00:06:16 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:26.868 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:26.868 00:06:18 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.868 00:06:18 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.401 00:02:29.401 real 0m3.731s 00:02:29.401 user 0m1.109s 00:02:29.401 sys 0m1.710s 00:02:29.401 00:06:20 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:29.401 00:06:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:29.401 ************************************ 00:02:29.401 END TEST denied 00:02:29.401 ************************************ 00:02:29.401 00:06:20 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:29.401 00:06:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:29.401 00:06:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:29.401 00:06:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:29.401 00:06:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:29.401 ************************************ 00:02:29.401 START TEST allowed 00:02:29.401 ************************************ 00:02:29.401 00:06:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:29.401 00:06:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:29.401 00:06:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:29.401 00:06:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:29.401 00:06:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.401 00:06:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:31.939 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:31.939 00:06:23 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:31.939 00:06:23 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:31.939 00:06:23 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:31.939 00:06:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:31.939 00:06:23 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:32.875 00:02:32.875 real 0m3.682s 00:02:32.875 user 0m0.967s 00:02:32.875 sys 0m1.551s 00:02:32.875 00:06:24 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:32.875 00:06:24 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:32.875 ************************************ 00:02:32.875 END TEST allowed 00:02:32.875 ************************************ 00:02:32.875 00:06:24 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:32.875 00:02:32.875 real 0m10.036s 00:02:32.875 user 0m3.118s 00:02:32.875 sys 0m4.909s 00:02:32.875 00:06:24 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:32.875 00:06:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:32.875 ************************************ 00:02:32.875 END TEST acl 00:02:32.875 ************************************ 00:02:32.875 00:06:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:32.876 00:06:24 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:32.876 00:06:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:32.876 00:06:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:32.876 00:06:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:33.134 ************************************ 00:02:33.134 START TEST hugepages 00:02:33.134 ************************************ 00:02:33.134 00:06:24 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:33.134 * Looking for test storage... 00:02:33.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.134 00:06:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.135 00:06:24 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43685768 kB' 'MemAvailable: 47192052 kB' 'Buffers: 2704 kB' 'Cached: 10249076 kB' 'SwapCached: 0 kB' 'Active: 7244692 kB' 'Inactive: 3508668 kB' 'Active(anon): 6849204 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504884 kB' 'Mapped: 203260 kB' 'Shmem: 6347624 kB' 'KReclaimable: 191936 kB' 'Slab: 575416 kB' 'SReclaimable: 191936 kB' 'SUnreclaim: 383480 kB' 'KernelStack: 12928 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 7969944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
[xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo key with IFS=': ' and hit '-- # continue' on every key from MemTotal through HugePages_Surp before matching Hugepagesize]
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
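The condensed loop above is the get_meminfo helper from setup/common.sh at work: it splits each /proc/meminfo line on IFS=': ', continues past every key that is not the one requested, and echoes the value of the first match. A minimal sketch of that pattern, reconstructed from the xtrace (the real helper also takes an optional NUMA-node argument and walks a mapfile'd copy of the per-node meminfo, which this sketch omits):

    # Sketch of the get_meminfo pattern traced above -- not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the repeated '-- # continue' records
            echo "$val"                       # kB figure, or a bare count for HugePages_*
            return 0
        done </proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize  # prints 2048 on this runner, matching the echo at common.sh@33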
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
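clear_hp, traced just above, resets every hugepage pool before the test configures its own: for each NUMA node it loops over the per-size pool directories and writes 0 (the repeated '-- # echo 0' records). A hedged sketch of the equivalent sysfs writes, using the paths from the trace (needs root; assumes 2048 kB pools as on this host):

    # Zero all per-node hugepage pools -- what the clear_hp loop above reports.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done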
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:33.137 00:06:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:33.137 00:06:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:33.137 00:06:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:33.137 00:06:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:33.137 ************************************
00:02:33.137 START TEST default_setup
00:02:33.137 ************************************
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:33.137 00:06:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:34.517 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:34.517 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:34.517 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:34.517 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:34.517 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:34.517 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:34.517 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:34.517 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:34.517 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:34.517 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:34.517 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:34.517 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:34.517 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:34.517 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:34.517 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:34.517 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:35.476 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.476 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45780316 kB' 'MemAvailable: 49286568 kB' 'Buffers: 2704 kB' 'Cached: 10249168 kB' 'SwapCached: 0 kB' 'Active: 7263648 kB' 'Inactive: 3508668 kB' 'Active(anon): 6868160 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523656 kB' 'Mapped: 203392 kB' 'Shmem: 6347716 kB' 'KReclaimable: 191872 kB' 'Slab: 574656 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 382784 kB' 'KernelStack: 12864 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
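The ioatdma/nvme -> vfio-pci lines above are scripts/setup.sh detaching the ioatdma channels and the NVMe disk from their kernel drivers so SPDK can drive them from user space. A hedged sketch of one standard sysfs route to the same result (setup.sh's actual logic is more involved; the BDF and device IDs below are taken from the log):

    bdf=0000:88:00.0                                              # the 8086:0a54 NVMe device
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # detach from nvme
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the next driver
    echo "$bdf" > /sys/bus/pci/drivers_probe                      # rebind -> vfio-pci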
[xtrace condensed: setup/common.sh@31-32 skipped every key from MemTotal through HardwareCorrupted with '-- # continue' before matching AnonHugePages]
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
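anon comes out 0 because transparent hugepages are in madvise mode on this runner ('always [madvise] never', tested at hugepages.sh@96 above) and nothing has madvised THP-backed anonymous memory at this point. A hedged sketch of that gate, combining the standard sysfs path with the get_meminfo sketch from earlier:

    # Mirror of the '!= *[never]*' test traced at hugepages.sh@96.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. 'always [madvise] never'
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)  # kB of THP-backed anon memory; 0 here
    fi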
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.477 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45781544 kB' 'MemAvailable: 49287796 kB' 'Buffers: 2704 kB' 'Cached: 10249172 kB' 'SwapCached: 0 kB' 'Active: 7263308 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867820 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523348 kB' 'Mapped: 203444 kB' 'Shmem: 6347720 kB' 'KReclaimable: 191872 kB' 'Slab: 574816 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 382944 kB' 'KernelStack: 12944 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
[xtrace condensed: setup/common.sh@31-32 skipped every key from MemTotal through HugePages_Rsvd with '-- # continue' before matching HugePages_Surp]
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
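With surp=0 and the snapshot above showing HugePages_Total: 1024 and HugePages_Free: 1024, the counters line up with the request made at hugepages.sh@57: 2097152 kB / 2048 kB per page = 1024 pages, all on node 0. A hedged sketch of that accounting (the exact assertions live further on in verify_nr_hugepages; get_meminfo as sketched earlier):

    req_kb=2097152                         # size passed to get_test_nr_hugepages
    page_kb=$(get_meminfo Hugepagesize)    # 2048
    want=$(( req_kb / page_kb ))           # 1024 pages
    total=$(get_meminfo HugePages_Total)   # 1024
    surp=$(get_meminfo HugePages_Surp)     # 0, surplus beyond nr_hugepages
    (( total - surp == want )) || echo "unexpected hugepage count" >&2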
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.478 00:06:26 setup.sh.hugepages.default_setup --
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45781768 kB' 'MemAvailable: 49288020 kB' 'Buffers: 2704 kB' 'Cached: 10249208 kB' 'SwapCached: 0 kB' 'Active: 7262304 kB' 'Inactive: 3508668 kB' 'Active(anon): 6866816 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522264 kB' 'Mapped: 203312 kB' 'Shmem: 6347756 kB' 'KReclaimable: 191872 kB' 'Slab: 574812 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 382940 kB' 'KernelStack: 12880 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB' 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.478 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
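One detail worth noting in the records above: mem=("${mem[@]#Node +([0-9]) }") at common.sh@29. Lines in /proc/meminfo read like 'HugePages_Rsvd: 0', but the per-node files under /sys/devices/system/node/nodeN/meminfo prefix every line with 'Node N ', so stripping that prefix lets the same parser serve both files. An illustrative one-liner (needs extglob):

    shopt -s extglob
    line='Node 0 HugePages_Rsvd:      0'
    echo "${line#Node +([0-9]) }"   # -> HugePages_Rsvd:      0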
00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 
00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:35.479 nr_hugepages=1024 00:02:35.479 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:35.479 resv_hugepages=0 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:35.480 surplus_hugepages=0 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:35.480 anon_hugepages=0 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45782472 
kB' 'MemAvailable: 49288724 kB' 'Buffers: 2704 kB' 'Cached: 10249212 kB' 'SwapCached: 0 kB' 'Active: 7262704 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867216 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522624 kB' 'Mapped: 203312 kB' 'Shmem: 6347760 kB' 'KReclaimable: 191872 kB' 'Slab: 574812 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 382940 kB' 'KernelStack: 12896 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
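This third scan is fetching HugePages_Total; its result feeds the consistency check already visible earlier in the trace at hugepages.sh@107-@110: with surp=0 and resv=0 collected, the script asserts that all 1024 requested default-size pages are accounted for, then re-reads HugePages_Total to confirm the kernel really allocated them. An illustrative reconstruction; nr_hugepages=1024 is set earlier in the test and assumed here:

    surp=$(get_meminfo HugePages_Surp)    # -> 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # -> 0 in this run
    echo "nr_hugepages=$nr_hugepages"     # nr_hugepages=1024
    echo "resv_hugepages=$resv"           # resv_hugepages=0
    echo "surplus_hugepages=$surp"        # surplus_hugepages=0
    (( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107
    (( 1024 == nr_hugepages )) &&              # hugepages.sh@109
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # @110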
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
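get_meminfo rescans the whole file from the top on every call, which is why the identical key-by-key comparisons repeat four times in this section (HugePages_Surp, HugePages_Rsvd, HugePages_Total, then the per-node HugePages_Surp below). A single-shot lookup would produce one record instead; shown only for comparison, this is not what setup/common.sh does:

    awk -F': +' '$1 == "HugePages_Total" { print $2 + 0; exit }' /proc/meminfo   # -> 1024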
00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.480 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
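The HugePages_Total scan ends here; the records that follow show it returning 1024 and hugepages.sh@112 calling get_nodes, which enumerates the NUMA nodes and records each node's share of the pages (node0 gets all 1024, node1 gets 0, no_nodes=2). A reconstruction of that loop from the hugepages.sh@27-@33 markers; the trace only shows the already-expanded counts, so reading them from sysfs here is an assumption:

    shopt -s extglob
    declare -A nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # this run: node0 -> 1024, node1 -> 0
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this host
        (( no_nodes > 0 ))
    }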
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21711232 kB' 'MemUsed: 11165708 kB' 'SwapCached: 0 kB' 'Active: 4809164 kB' 'Inactive: 3265856 kB' 'Active(anon): 4619696 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7765284 kB' 'Mapped: 74484 kB' 'AnonPages: 312900 kB' 'Shmem: 4309960 kB' 'KernelStack: 7896 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116964 kB' 'Slab: 315736 kB' 'SReclaimable: 116964 kB' 'SUnreclaim: 198772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- [xtrace elided: setup/common.sh@31-@32 walks every node0 meminfo field from MemTotal through HugePages_Free looking for HugePages_Surp; each non-matching key takes the "continue" branch]
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:35.481 node0=1024 expecting 1024
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:35.481
00:02:35.481 real 0m2.422s
00:02:35.481 user 0m0.610s
00:02:35.481 sys 0m0.891s
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:35.481 00:06:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:35.481 ************************************
00:02:35.481 END TEST default_setup
00:02:35.481 ************************************
00:02:35.481 00:06:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:35.481 00:06:27 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:35.481 00:06:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:35.481 00:06:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:35.481 00:06:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:35.751 ************************************
00:02:35.751 START TEST per_node_1G_alloc
00:02:35.751 ************************************
00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:35.751 00:06:27
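
The elided traces above are all the same helper at work: setup/common.sh's get_meminfo loads /proc/meminfo (or a node's meminfo under /sys/devices/system/node) into an array, strips the "Node N " prefix that the sysfs files carry, and scans the "key: value" lines one by one until the requested field matches, which is why the xtrace shows one [[ ... ]] / continue pair per meminfo key. A minimal sketch of that pattern, reconstructed only from the operations visible in the trace (mapfile, the extglob prefix strip, the IFS=': ' read); the function name is illustrative, not the SPDK source:

  shopt -s extglob                      # required by the +([0-9]) strip below

  get_meminfo_sketch() {
      local get=$1 node=${2:-}          # field name; optional NUMA node
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # sysfs node files prefix each line "Node N "
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"  # "HugePages_Surp: 0" -> var, val
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo_sketch HugePages_Surp 0   # prints 0 for node0 in the run above

The linear scan is O(number of meminfo fields) per lookup, which is exactly what makes the xtrace so verbose: every field before the match produces one comparison record in the log.
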
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.751 00:06:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:36.686 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:36.686 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:36.686 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:36.686 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:36.686 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:36.686 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:36.686 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:36.686 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:36.686 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:36.686 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:36.686 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:36.686 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:36.686 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:36.686 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:36.687 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:36.687 
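
The arithmetic behind get_test_nr_hugepages is visible in the trace: the requested 1048576 kB (1 GiB) at the default 2048 kB hugepage size is 512 pages, and because explicit nodes 0 and 1 were passed, each listed node is asked for the full 512. The "local IFS=," at hugepages.sh@143 is what comma-joins the node list into the HUGENODE=0,1 environment that scripts/setup.sh is then invoked with. A hedged re-derivation (variable names illustrative):

  size=1048576                                   # kB requested (1 GiB)
  default_hugepages=2048                         # kB per page (Hugepagesize in this run)
  nr_hugepages=$(( size / default_hugepages ))   # 512 pages
  user_nodes=(0 1)                               # explicit nodes passed in
  declare -A nodes_test
  for node in "${user_nodes[@]}"; do
      nodes_test[$node]=$nr_hugepages            # each node gets the full 512 (1 GiB/node)
  done
  IFS=,                                          # the trace's "local IFS=," joins with commas
  echo "NRHUGE=$nr_hugepages HUGENODE=${user_nodes[*]}"
  # -> NRHUGE=512 HUGENODE=0,1
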
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:36.687 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.954 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45797224 kB' 'MemAvailable: 49303476 kB' 'Buffers: 2704 kB' 'Cached: 10249280 kB' 'SwapCached: 0 kB' 'Active: 7262940 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867452 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522812 kB' 'Mapped: 203388 kB' 'Shmem: 6347828 kB' 'KReclaimable: 191872 kB' 'Slab: 574888 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 383016 kB' 'KernelStack: 12864 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196740 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- [xtrace elided: setup/common.sh@31-@32 walks every /proc/meminfo field from MemTotal through HardwareCorrupted looking for AnonHugePages; each non-matching key takes the "continue" branch]
00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- #
get_meminfo HugePages_Surp 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45799648 kB' 'MemAvailable: 49305900 kB' 'Buffers: 2704 kB' 'Cached: 10249284 kB' 'SwapCached: 0 kB' 'Active: 7262888 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867400 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522752 kB' 'Mapped: 203324 kB' 'Shmem: 6347832 kB' 'KReclaimable: 191872 kB' 'Slab: 574836 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 382964 kB' 'KernelStack: 12912 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196740 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB' 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:36.955 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: setup/common.sh@31-@32 walks the remaining /proc/meminfo fields from Buffers through HugePages_Rsvd looking for HugePages_Surp; each non-matching key takes the "continue" branch]
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc --
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.956 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.957 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45798172 kB' 'MemAvailable: 49304424 kB' 'Buffers: 2704 kB' 'Cached: 10249300 kB' 'SwapCached: 0 kB' 'Active: 7262852 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867364 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522696 kB' 'Mapped: 203324 kB' 'Shmem: 6347848 kB' 'KReclaimable: 191872 kB' 'Slab: 574912 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 383040 kB' 'KernelStack: 12896 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
[... xtrace of the per-key scan elided: every key that is not HugePages_Rsvd falls through to "continue" at setup/common.sh@32 ...]
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:36.958 nr_hugepages=1024
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:36.958 resv_hugepages=0
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:36.958 surplus_hugepages=0
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:36.958 anon_hugepages=0
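The four summary lines just printed feed the consistency checks traced next: hugepages.sh verifies that the hugepage counts still add up once surplus and reserved pages are included. A sketch of that arithmetic, with variable names taken from the trace; the free/total names are assumptions, and the 1024 literals in the traced tests are values xtrace has already substituted:

    nr_hugepages=1024   # requested page count, echoed above
    surp=0              # HugePages_Surp, read earlier
    resv=0              # HugePages_Rsvd, read earlier
    free=$(get_meminfo HugePages_Free)
    total=$(get_meminfo HugePages_Total)
    # hugepages.sh@107 and @110: both counts must equal requested + surplus + reserved
    (( free == nr_hugepages + surp + resv ))
    (( total == nr_hugepages + surp + resv ))

Under the harness's errexit semantics, a failed arithmetic test here would presumably abort the hugepage setup rather than continue with a bad count.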
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.958 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45797920 kB' 'MemAvailable: 49304172 kB' 'Buffers: 2704 kB' 'Cached: 10249324 kB' 'SwapCached: 0 kB' 'Active: 7262916 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867428 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522736 kB' 'Mapped: 203324 kB' 'Shmem: 6347872 kB' 'KReclaimable: 191872 kB' 'Slab: 574912 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 383040 kB' 'KernelStack: 12912 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
[... xtrace of the per-key scan elided: every key that is not HugePages_Total falls through to "continue" at setup/common.sh@32 ...]
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
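get_nodes, entered at hugepages.sh@112 above, discovers the NUMA topology by globbing sysfs. A sketch reconstructed from the hugepages.sh@27-33 trace lines (again not the verbatim source; the 512 in the traced assignment is a value xtrace has already substituted, here 1024 pages split across 2 nodes):

    shopt -s extglob
    get_nodes() {
        local node
        # one entry per NUMA node directory, e.g. node0, node1
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=512   # ${node##*node} leaves just the index
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # sanity: at least one node must exist
    }

On this box the glob matches node0 and node1, giving no_nodes=2, which is what the trace records next.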
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.959 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22776280 kB' 'MemUsed: 10100660 kB' 'SwapCached: 0 kB' 'Active: 4809172 kB' 'Inactive: 3265856 kB' 'Active(anon): 4619704 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7765396 kB' 'Mapped: 74436 kB' 'AnonPages: 312756 kB' 'Shmem: 4310072 kB' 'KernelStack: 7896 kB' 'PageTables: 4832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116964 kB' 'Slab: 315812 kB' 'SReclaimable: 116964 kB' 'SUnreclaim: 198848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace of the node0 per-key scan elided: every key before HugePages_Surp falls through to "continue" at setup/common.sh@32 ...]
00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.960 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23023208 kB' 'MemUsed: 4641544 kB' 'SwapCached: 0 kB' 'Active: 2453552 kB' 'Inactive: 242812 kB' 'Active(anon): 2247532 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242812 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2486656 kB' 'Mapped: 128828 kB' 'AnonPages: 209756 kB' 'Shmem: 2037824 kB' 'KernelStack: 4968 kB' 'PageTables: 3124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74908 kB' 'Slab: 259100 kB' 'SReclaimable: 74908 kB' 'SUnreclaim: 184192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
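The xtrace above is setup/common.sh's get_meminfo: pick /proc/meminfo or the per-node sysfs file, strip the "Node N " prefix, then split each line on IFS=': ' and return the value once the requested key matches. A minimal standalone sketch of the same pattern (the name my_get_meminfo is illustrative, not the SPDK source verbatim):

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above.
my_get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Per-node stats live in sysfs; fall back to the global file otherwise.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    # Per-node lines carry a "Node N " prefix; strip it so both files parse alike.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # The escaped pattern in the trace (\H\u\g\e...) is just a literal compare.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Run as my_get_meminfo HugePages_Surp 1, it would print 0 against the node1 snapshot above, matching the echo 0 in the trace.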
[xtrace condensed: the same setup/common.sh@31-32 read/continue scan walks the node1 snapshot printed above, key by key from MemTotal through HugePages_Free, until HugePages_Surp matches]
00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:36.961 00:02:36.961
real 0m1.424s
user 0m0.612s
sys 0m0.764s 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:36.961 00:06:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:36.961
************************************
END TEST per_node_1G_alloc
************************************ 00:02:36.961 00:06:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
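per_node_1G_alloc finishes by asserting that each NUMA node ended up with its assigned 512 pages (node0=512, node1=512). The same check can be re-run by hand through sysfs; a sketch assuming 2048 kB pages as in this run (the paths are standard kernel sysfs, the expected count of 512 is specific to this test):

# Report per-node 2 MiB hugepage counts, as the test's nodeN=... lines do.
for node in /sys/devices/system/node/node[0-9]*; do
    n=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    echo "${node##*/}=$n expecting 512"
done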
00:02:36.961 00:06:28 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:36.961 00:06:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:36.961 00:06:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:36.961
************************************
START TEST even_2G_alloc
************************************ 00:02:36.961 00:06:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
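Before setup.sh runs, get_test_nr_hugepages turns the 2097152 argument into nr_hugepages=1024 and splits it evenly over both nodes. A sketch of that arithmetic (variable names illustrative; reading the size argument as kB is an assumption, but it is the only unit consistent with the 1024-page result):

# 2 GiB expressed in kB, divided by the 2048 kB default hugepage size.
size_kb=2097152
default_hugepage_kb=2048
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # = 1024, as in the trace
no_nodes=2
echo "per node: $(( nr_hugepages / no_nodes ))"     # = 512 for node0 and node1

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, setup.sh is then expected to place 512 pages on each node rather than filling node0 first.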
00:02:38.345 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.345 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:38.345 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.345 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.345 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.345 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.345 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.345 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.345 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.345 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.345 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.345 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.345 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.345 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.345 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.345 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.345 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.345 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45814460 kB' 'MemAvailable: 49320712 kB' 'Buffers: 2704 kB' 'Cached: 10249424 kB' 'SwapCached: 0 kB' 'Active: 7263280 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867792 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523068 kB' 'Mapped: 203412 kB' 'Shmem: 6347972 kB' 'KReclaimable: 191872 kB' 'Slab: 574932 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 383060 kB' 'KernelStack: 12896 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
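verify_nr_hugepages only samples AnonHugePages after checking that the transparent-hugepage mode string ("always [madvise] never" here) does not have "[never]" selected. The same guard as a standalone snippet (a sketch; the explicit else branch is an assumption about what the skipped path would amount to):

# Skip the AnonHugePages baseline when THP is disabled outright.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in this run
else
    anon=0   # THP off: anonymous hugepages cannot accumulate
fi
echo "anon=$anon"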
[xtrace condensed: setup/common.sh@31-32 scan the /proc/meminfo snapshot above key by key (MemTotal through HardwareCorrupted) with the same read/continue pattern until AnonHugePages matches]
00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.346 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45814208 kB' 'MemAvailable: 49320460 kB' 'Buffers: 2704 kB' 'Cached: 10249424 kB' 'SwapCached: 0 kB' 'Active: 7262956 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867468 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522732 kB' 'Mapped: 203348 kB' 'Shmem: 6347972 kB' 'KReclaimable: 191872 kB' 'Slab: 574932 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 383060 kB' 'KernelStack: 12928 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
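This second /proc/meminfo snapshot is read only for HugePages_Surp, the count of surplus pages allocated beyond nr_hugepages via overcommit (0 here, as expected on a freshly configured pool). The relevant counters can be pulled in one line with plain awk; against the values printed above it would show:

awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
# HugePages_Total: 1024
# HugePages_Free: 1024
# HugePages_Rsvd: 0
# HugePages_Surp: 0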
[xtrace condensed: the HugePages_Surp scan of the second snapshot proceeds with the same setup/common.sh@31-32 read/continue pattern, key by key from MemTotal through Dirty]
00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.347 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
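The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until the requested field (here HugePages_Surp) matches. A minimal bash sketch of that parsing pattern, reconstructed from what the trace shows rather than copied verbatim from SPDK's setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Fetch one field from /proc/meminfo, or from a per-NUMA-node meminfo
    # file whose lines carry a "Node <n> " prefix that must be stripped.
    # Reconstruction of the traced pattern, not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val mem
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix, if any
        # Scan "Key: value [kB]" pairs until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On this host, surp=$(get_meminfo HugePages_Surp) yields 0, matching the trace.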
00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.348 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45814592 kB' 'MemAvailable: 49320844 kB' 'Buffers: 2704 kB' 'Cached: 10249444 kB' 'SwapCached: 0 kB' 'Active: 7263132 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867644 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522888 kB' 'Mapped: 203348 kB' 'Shmem: 6347992 kB' 'KReclaimable: 191872 kB' 'Slab: 574944 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 383072 kB' 'KernelStack: 12944 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
[xtrace of the per-key meminfo scan elided: MemTotal through HugePages_Free are each tested against HugePages_Rsvd and skipped via "continue"; HugePages_Rsvd matches]
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:38.350 nr_hugepages=1024
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:38.350 resv_hugepages=0
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:38.350 surplus_hugepages=0
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:38.350 anon_hugepages=0
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
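With surp, resv, and anon all zero, the arithmetic guards traced above (setup/hugepages.sh@107 and @109) reduce to checking that the kernel holds exactly the requested number of hugepages. A sketch of that verification, assuming the get_meminfo helper sketched earlier and the expected count of 1024 used in this run:

    expected=1024
    surp=$(get_meminfo HugePages_Surp)            # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)            # 0 in this run
    nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 in this run
    # Allocated pages plus surplus and reserved pages must add up to the
    # request, and the kernel must report exactly the requested count.
    (( expected == nr_hugepages + surp + resv )) || exit 1
    (( expected == nr_hugepages )) || exit 1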
00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45814788 kB' 'MemAvailable: 49321040 kB' 'Buffers: 2704 kB' 'Cached: 10249468 kB' 'SwapCached: 0 kB' 'Active: 7263112 kB' 'Inactive: 3508668 kB' 'Active(anon): 6867624 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522888 kB' 'Mapped: 203348 kB' 'Shmem: 6348016 kB' 'KReclaimable: 191872 kB' 'Slab: 574944 kB' 'SReclaimable: 191872 kB' 'SUnreclaim: 383072 kB' 'KernelStack: 12944 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7991700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.350 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 
00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.351 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.351 [... the same IFS=': ' / read -r var val _ / field comparison / continue xtrace repeats for every remaining meminfo field: KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree ...] 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.352
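The loop condensed above is the generic meminfo lookup in setup/common.sh: scan the (optionally per-node) meminfo file field by field with IFS=': ' until the requested key matches, then echo its value. A minimal standalone bash sketch of that pattern, reconstructed from this xtrace (names mirror the trace, but this is an illustration, not the repo's verbatim source):

shopt -s extglob  # needed for the "Node N " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node meminfo files prefix every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Walk the fields until the requested one matches, as in the
        # [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle above.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total   # prints 1024 in this run, matching the echo 1024 below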
00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22791148 kB' 'MemUsed: 10085792 kB' 'SwapCached: 0 kB' 'Active: 4808912 kB' 'Inactive: 3265856 kB' 'Active(anon): 4619444 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7765520 kB' 'Mapped: 74520 kB' 'AnonPages: 312400 kB' 'Shmem: 4310196 kB' 'KernelStack: 7896 kB' 'PageTables: 
4784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116964 kB' 'Slab: 315792 kB' 'SReclaimable: 116964 kB' 'SUnreclaim: 198828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.352 [... the same IFS=': ' / read -r var val _ / field comparison / continue xtrace repeats for every node0 meminfo field from MemTotal through FilePmdMapped ...] 00:02:38.352 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.352 00:06:29
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.353 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.613 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.614 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23023716 kB' 'MemUsed: 4641036 kB' 'SwapCached: 0 kB' 'Active: 2454268 kB' 'Inactive: 242812 kB' 'Active(anon): 2248248 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242812 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2486672 kB' 'Mapped: 128828 kB' 'AnonPages: 210496 kB' 'Shmem: 2037840 kB' 'KernelStack: 5048 kB' 'PageTables: 
3336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74908 kB' 'Slab: 259152 kB' 'SReclaimable: 74908 kB' 'SUnreclaim: 184244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.614 [... the same IFS=': ' / read -r var val _ / field comparison / continue xtrace repeats for every node1 meminfo field from MemTotal through FilePmdMapped ...] 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.615 00:06:29
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:38.615 node0=512 expecting 512 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:38.615 node1=512 expecting 512 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:38.615 00:02:38.615 real 0m1.422s 00:02:38.615 user 0m0.576s 00:02:38.615 sys 0m0.803s 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:38.615 00:06:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:38.615 ************************************ 00:02:38.615 END TEST even_2G_alloc 00:02:38.615 ************************************ 00:02:38.615 00:06:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:38.615 00:06:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:38.615 00:06:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.615 00:06:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.615 00:06:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:38.615 
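The "node0=512 expecting 512" / "node1=512 expecting 512" lines above are the even_2G_alloc pass condition: 1024 two-megabyte hugepages split evenly across the two NUMA nodes, with zero surplus on each. A hedged sketch of that per-node assertion, using the values observed in this run (an illustration, not the repo's verbatim code):

# Values taken from the trace: two nodes, 512 hugepages measured on each.
declare -a nodes_test=(512 512)   # measured per-node HugePages_Total (+ surplus)
declare -a nodes_sys=(512 512)    # expected even split of the 1024 pages
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    [[ ${nodes_test[node]} -eq ${nodes_sys[node]} ]] || exit 1
done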
************************************ 00:02:38.615 START TEST odd_alloc 00:02:38.615 ************************************ 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.615 00:06:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:39.553 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:39.553 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:39.553 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:39.553 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:39.553 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:39.553 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:39.553 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:02:39.553 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:39.553 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:39.553 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:39.553 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:39.553 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:39.553 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:39.553 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:39.553 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:39.553 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:39.553 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.817 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45830372 kB' 'MemAvailable: 49336592 kB' 'Buffers: 2704 kB' 'Cached: 10249556 kB' 'SwapCached: 0 kB' 'Active: 7260244 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864756 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519960 kB' 'Mapped: 202548 kB' 'Shmem: 6348104 kB' 'KReclaimable: 191808 kB' 'Slab: 574696 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382888 kB' 'KernelStack: 12896 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 7977424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB' 00:02:39.817 [... the same IFS=': ' / read -r var val _ / field comparison / continue xtrace repeats for every meminfo field from MemTotal through CommitLimit ...] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.818 00:06:31
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.818 
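[annotation] The repeated common.sh@31/@32 entries above are a single while-read loop inside the get_meminfo helper of setup/common.sh: it snapshots the meminfo file into an array, strips any "Node N " prefix, then compares each key against the requested field and echoes that field's value. A minimal bash sketch reconstructed from this trace (names follow the trace; the helper as shipped in SPDK may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob                        # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val mem_f mem
        mem_f=/proc/meminfo
        # per-NUMA-node counters come from sysfs when a node is requested
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop "Node N " prefixes, if any
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0         # value only; the unit column is discarded
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo AnonHugePages               # prints 0 on this box, per the dump below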
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.818 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.819 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45834980 kB' 'MemAvailable: 49341200 kB' 'Buffers: 2704 kB' 'Cached: 10249560 kB' 'SwapCached: 0 kB' 'Active: 7260424 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864936 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520120 kB' 'Mapped: 202516 kB' 'Shmem: 6348108 kB' 'KReclaimable: 191808 kB' 'Slab: 574680 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382872 kB' 'KernelStack: 13008 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7978804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
00:02:39.819 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: read/compare loop skips MemTotal through HugePages_Rsvd; none matches HugePages_Surp]
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18-31 -- # [trace condensed: same locals, mapfile -t mem, Node-prefix strip, and IFS=': ' setup as above]
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45833812 kB' 'MemAvailable: 49340032 kB' 'Buffers: 2704 kB' 'Cached: 10249572 kB' 'SwapCached: 0 kB' 'Active: 7261044 kB' 'Inactive: 3508668 kB' 'Active(anon): 6865556 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520724 kB' 'Mapped: 202440 kB' 'Shmem: 6348120 kB' 'KReclaimable: 191808 kB' 'Slab: 574680 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382872 kB' 'KernelStack: 13344 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7978824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
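[annotation] Two details of the call sequence above are worth noting. First, node= is empty here, so the @23 test against /sys/devices/system/node/node/meminfo fails and the global /proc/meminfo is read; with a node argument the same parser would read the per-node sysfs file instead. Second, per-node lines carry a "Node N " prefix that the @29 expansion strips, which is what lets one loop handle both formats. A self-contained demo of that strip (values invented for illustration):

    #!/usr/bin/env bash
    shopt -s extglob
    mem=("Node 0 MemTotal: 32768 kB" "MemFree: 1024 kB")
    mem=("${mem[@]#Node +([0-9]) }")   # remove "Node <digits> " where present
    printf '%s\n' "${mem[@]}"
    # MemTotal: 32768 kB
    # MemFree: 1024 kB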
00:02:39.820 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: read/compare loop skips MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:39.822 nr_hugepages=1025
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:39.822 resv_hugepages=0
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:39.822 surplus_hugepages=0
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:39.822 anon_hugepages=0
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
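[annotation] At this point the test has all four counters it needs: anon=0, surp=0, resv=0, and the 1025 hugepages it configured. The two arithmetic guards at hugepages.sh@107/@109 assert that the odd-sized allocation actually stuck and that no surplus or reserved pages distort the count. A worked restatement with the values this log reports (variable names are illustrative, not verbatim from hugepages.sh):

    #!/usr/bin/env bash
    anon=0 surp=0 resv=0
    nr_hugepages=1025                            # the odd page count under test
    (( 1025 == nr_hugepages + surp + resv ))     # @107: 1025 == 1025 + 0 + 0, passes
    (( 1025 == nr_hugepages ))                   # @109: passes
    echo "odd_alloc accounting consistent"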
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18-31 -- # [trace condensed: same locals, mapfile -t mem, Node-prefix strip, and IFS=': ' setup as above]
00:02:39.822 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45834412 kB' 'MemAvailable: 49340632 kB' 'Buffers: 2704 kB' 'Cached: 10249596 kB' 'SwapCached: 0 kB' 'Active: 7261016 kB' 'Inactive: 3508668 kB' 'Active(anon): 6865528 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520644 kB' 'Mapped: 202440 kB' 'Shmem: 6348144 kB' 'KReclaimable: 191808 kB' 'Slab: 574672 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382864 kB' 'KernelStack: 13104 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7978844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: read/compare loop skips MemTotal through Mapped while scanning for HugePages_Total]
00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.823 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22802900 kB' 'MemUsed: 10074040 kB' 'SwapCached: 0 kB' 'Active: 4806344 kB' 'Inactive: 3265856 kB' 'Active(anon): 4616876 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7765576 kB' 'Mapped: 73800 kB' 'AnonPages: 309788 kB' 'Shmem: 4310252 kB' 'KernelStack: 8296 kB' 'PageTables: 6016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116900 kB' 'Slab: 315568 kB' 'SReclaimable: 116900 kB' 'SUnreclaim: 198668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
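For readers following the trace: the get_meminfo helper that just returned 1025 (and is about to be called per node) reduces to a short parser over /proc/meminfo, or over /sys/devices/system/node/node$N/meminfo when a node is given. A minimal standalone sketch of that logic, with names mirroring the traced setup/common.sh (the explicit return 1 on a miss is an assumption, not shown in this trace):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node <N> " prefix strip below

  get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; with no node argument the test for
    # /sys/devices/system/node/node/meminfo fails and the global file is kept.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
      # IFS=': ' consumes the trailing colon after the field name.
      IFS=': ' read -r var val _ <<< "$line"
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done
    return 1
  }

  # e.g. get_meminfo HugePages_Total   -> 1025 (global, as echoed above)
  #      get_meminfo HugePages_Surp 0  -> 0    (node 0, as probed next)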
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.824 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22802900 kB' 'MemUsed: 10074040 kB' 'SwapCached: 0 kB' 'Active: 4806344 kB' 'Inactive: 3265856 kB' 'Active(anon): 4616876 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7765576 kB' 'Mapped: 73800 kB' 'AnonPages: 309788 kB' 'Shmem: 4310252 kB' 'KernelStack: 8296 kB' 'PageTables: 6016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116900 kB' 'Slab: 315568 kB' 'SReclaimable: 116900 kB' 'SUnreclaim: 198668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-@32 read/continue over each node0 field above, MemTotal through HugePages_Free, until the requested key matches]
00:02:39.825 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.825 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.825 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:39.825 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:39.825 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:39.825 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:39.825 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.085 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23027976 kB' 'MemUsed: 4636776 kB' 'SwapCached: 0 kB' 'Active: 2454720 kB' 'Inactive: 242812 kB' 'Active(anon): 2248700 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242812 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2486768 kB' 'Mapped: 128640 kB' 'AnonPages: 210872 kB' 'Shmem: 2037936 kB' 'KernelStack: 5032 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74908 kB' 'Slab: 259104 kB' 'SReclaimable: 74908 kB' 'SUnreclaim: 184196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-@32 read/continue over each node1 field above until the requested key matches]
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
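The two per-node probes just traced feed a simple reconciliation: the expected per-node split (nodes_test) is topped up with reserved and surplus pages, then compared against what each node actually reports via sysfs (nodes_sys, filled by get_nodes earlier). A condensed sketch of that bookkeeping, reusing get_meminfo from the earlier sketch; resv is taken as 0 and the initial nodes_test values as computed earlier in this run, both assumptions for illustration:

  shopt -s extglob
  nodes_test=(512 513)   # expected split computed earlier in the test
  declare -a nodes_sys
  resv=0                 # reserved pages; 0 in this run

  # What sysfs says each node actually holds:
  for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done

  # Top up the expectation with reserved + surplus pages per node:
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done

Note that the final @130 check compares the two value sets after sorting, which is why "node0=512 expecting 513" can still pass: the test only requires that the multiset of per-node counts matches, not the node order.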
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:02:40.086
00:02:40.086 real 0m1.427s
00:02:40.086 user 0m0.664s
00:02:40.086 sys 0m0.723s
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:40.086 00:06:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:40.086 ************************************
00:02:40.086 END TEST odd_alloc
00:02:40.086 ************************************
00:02:40.086 00:06:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:40.086 00:06:31 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:02:40.086 00:06:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:40.086 00:06:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:40.086 00:06:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:40.086 ************************************
00:02:40.086 START TEST custom_alloc
00:02:40.086 ************************************
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
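The get_test_nr_hugepages call traced at the end of the block above is plain unit arithmetic: it turns a requested pool size in kB into a page count using the system's default hugepage size, here 2048 kB, so 1048576 kB (1 GiB) yields 512 pages. A minimal sketch of that conversion, reusing get_meminfo from the earlier sketch (the clamp to at least one default page is an assumption; the traced script guards the same boundary with its size >= default_hugepages test):

  get_test_nr_hugepages() {
    local size=$1                                   # requested pool size in kB
    local default_hugepages
    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 kB on this system
    if (( size < default_hugepages )); then
      size=$default_hugepages                       # never request less than one page
    fi
    nr_hugepages=$(( size / default_hugepages ))
  }

  # get_test_nr_hugepages 1048576  -> nr_hugepages=512   (1 GiB / 2 MiB)
  # get_test_nr_hugepages 2097152  -> nr_hugepages=1024  (2 GiB / 2 MiB, the second pool below)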
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
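The @81-@84 loop above is the even-split fallback used when no per-node counts were requested: it hands out the integer share from the highest-numbered node down, so integer division leaves any remainder on the earlier nodes. Here it turns 512 pages into 256 per node, and it is also exactly where odd_alloc's 513/512 split of 1025 pages came from. A sketch of the loop in isolation (nr_hugepages and no_nodes are assumed to be set by the caller, as in the traced script):

  get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=$no_nodes
    while (( _no_nodes > 0 )); do
      # Integer share for the current (last remaining) node ...
      nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
      # ... then shrink the pool and move to the next node down.
      (( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
      (( _no_nodes-- ))
    done
  }

  # nr_hugepages=512  no_nodes=2 -> nodes_test=(256 256)
  # nr_hugepages=1025 no_nodes=2 -> nodes_test=(513 512)   # the odd_alloc split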
00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.087 00:06:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:41.025 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.025 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.025 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.025 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.025 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.025 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.025 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.025 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.025 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.025 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.025 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.025 0000:80:04.5 (8086 0e25): Already using the 
00:02:41.025 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:41.025 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:41.025 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:41.025 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:41.025 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:41.025 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:41.025 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:41.025 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:41.025 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:41.025 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:41.025 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:41.025 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:41.025 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:41.025 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:41.025 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:41.025 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:41.025 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.290 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.291 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44792180 kB' 'MemAvailable: 48298400 kB' 'Buffers: 2704 kB' 'Cached: 10249684 kB' 'SwapCached: 0 kB' 'Active: 7260084 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864596 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519664 kB' 'Mapped: 202712 kB' 'Shmem: 6348232 kB' 'KReclaimable: 191808 kB' 'Slab: 574692 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382884 kB' 'KernelStack: 12944 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7976680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
00:02:41.291 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:41.291 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:41.291 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.291 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[the same common.sh@32/@31 compare-and-continue trace repeats for each remaining /proc/meminfo field until the requested key matches]
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
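Each of these checks is a get_meminfo call: snapshot the relevant meminfo file, strip any "Node N " prefix, then scan key by key with IFS=': ' until the requested field matches and echo its value. A compact reconstruction of that parse loop from the common.sh trace above (a sketch, not the verbatim helper):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo
      # A per-node query reads the node-local file instead (@22-@23).
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      # Node files prefix each line with "Node N "; strip it (@29).
      mem=("${mem[@]#Node +([0-9]) }")
      # Scan field by field until the requested key matches (@31-@33).
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
    }

    get_meminfo HugePages_Total    # -> 1536 in the snapshots above
    get_meminfo AnonHugePages      # -> 0, hence anon=0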
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44792432 kB' 'MemAvailable: 48298652 kB' 'Buffers: 2704 kB' 'Cached: 10249684 kB' 'SwapCached: 0 kB' 'Active: 7260504 kB' 'Inactive: 3508668 kB' 'Active(anon): 6865016 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520016 kB' 'Mapped: 202648 kB' 'Shmem: 6348232 kB' 'KReclaimable: 191808 kB' 'Slab: 574692 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382884 kB' 'KernelStack: 12960 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7976700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.292 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[the same common.sh@32/@31 compare-and-continue trace repeats for each remaining /proc/meminfo field until the requested key matches]
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44792432 kB' 'MemAvailable: 48298652 kB' 'Buffers: 2704 kB' 'Cached: 10249684 kB' 'SwapCached: 0 kB' 'Active: 7259940 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864452 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519532 kB' 'Mapped: 202648 kB' 'Shmem: 6348232 kB' 'KReclaimable: 191808 kB' 'Slab: 574692 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382884 kB' 'KernelStack: 12992 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7976352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.294 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[the same common.sh@32/@31 compare-and-continue trace repeats field by field; the capture ends part-way through this scan, before HugePages_Rsvd matches]
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.295 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- 
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:41.296 nr_hugepages=1536
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:41.296 resv_hugepages=0
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:41.296 surplus_hugepages=0
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:41.296 anon_hugepages=0
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.296 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44791688 kB' 'MemAvailable: 48297908 kB' 'Buffers: 2704 kB' 'Cached: 10249728 kB' 'SwapCached: 0 kB' 'Active: 7259844 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864356 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519364 kB' 'Mapped: 202648 kB' 'Shmem: 6348276 kB' 'KReclaimable: 191808 kB' 'Slab: 574700 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382892 kB' 'KernelStack: 12880 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7976376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
00:02:41.296-00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [condensed: every /proc/meminfo key from MemTotal through Unaccepted is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and hits continue; repeated xtrace elided]
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
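
The xtrace above is setup/common.sh's get_meminfo walking the meminfo file one key at a time, which is why every key other than the requested one shows up as a single continue. A minimal sketch of that loop as it can be reconstructed from this trace (names and @-line references are taken from the xtrace; the exact SPDK source may differ):

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument the per-NUMA-node file is used instead (common.sh@23-24)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it (common.sh@29)
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # "Key: value kB" -> var=Key val=value; every non-matching key is
            # one "continue" entry in the trace above (common.sh@31-32)
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    # get_meminfo HugePages_Total   -> 1536 in this run
    # get_meminfo HugePages_Surp 0  -> 0 (NUMA node 0) in this run
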
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:02:41.297 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.298 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22804224 kB' 'MemUsed: 10072716 kB' 'SwapCached: 0 kB' 'Active: 4805568 kB' 'Inactive: 3265856 kB' 'Active(anon): 4616100 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7765580 kB' 'Mapped: 73964 kB' 'AnonPages: 308976 kB' 'Shmem: 4310256 kB' 'KernelStack: 7880 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116900 kB' 'Slab: 315468 kB' 'SReclaimable: 116900 kB' 'SUnreclaim: 198568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:41.298-00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [condensed: every node0 meminfo key from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and hits continue; repeated xtrace elided]
00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
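
hugepages.sh@115-117 then walk every NUMA node, add the reserved count to the node's expected total, and fold in that node's surplus pages. A short sketch of that bookkeeping, assuming the get_meminfo sketch above and the values visible in this run (nodes_test, resv, and the surplus lookup are names from the xtrace; the loop body itself is an assumption, not the verbatim source):

    nodes_test=([0]=512 [1]=1024)   # expected custom split seen in this run
    resv=0                          # HugePages_Rsvd, looked up earlier in the log
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                # hugepages.sh@116
        surp=$(get_meminfo HugePages_Surp "$node")    # hugepages.sh@117; 0 for both nodes here
        (( nodes_test[node] += surp ))                # matches the "+= 0" entries in the trace
    done
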
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 21986708 kB' 'MemUsed: 5678044 kB' 'SwapCached: 0 kB' 'Active: 2454600 kB' 'Inactive: 242812 kB' 'Active(anon): 2248580 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242812 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2486892 kB' 'Mapped: 128684 kB' 'AnonPages: 210752 kB' 'Shmem: 2038060 kB' 'KernelStack: 5016 kB' 'PageTables: 3352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74908 kB' 'Slab: 259232 kB' 'SReclaimable: 74908 kB' 'SUnreclaim: 184324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.299 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:41.300 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.300 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:41.300 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the HugePages_Surp lookup re-splits each meminfo line with IFS=': ' / read -r var val _ and continues past every non-matching key, SwapCached through HugePages_Free]
00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.301 00:06:32
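The xtrace above is setup/common.sh's get_meminfo helper scanning a meminfo file one "key: value" line at a time: split on IFS=': ', continue past non-matching keys, print the value on a match. A minimal standalone sketch of that pattern, assuming plain /proc/meminfo input (the function name get_meminfo_value is illustrative, not the script's own):

  #!/usr/bin/env bash
  # Look up one /proc/meminfo key the way get_meminfo does: split each line
  # on ': ', skip non-matching keys, print the matching value and stop.
  get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue
      echo "$val"
      return 0
    done </proc/meminfo
    return 1
  }
  get_meminfo_value HugePages_Surp   # should print 0 on this node, as logged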
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:41.301 node0=512 expecting 512 00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:41.301 node1=1024 expecting 1024 00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:41.301 00:02:41.301 real 0m1.392s 00:02:41.301 user 0m0.593s 00:02:41.301 sys 0m0.743s 00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:41.301 00:06:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:41.301 ************************************ 00:02:41.301 END TEST custom_alloc 00:02:41.301 ************************************ 00:02:41.561 00:06:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:41.561 00:06:32 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:41.561 00:06:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.561 00:06:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.561 00:06:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:41.561 ************************************ 00:02:41.561 START TEST no_shrink_alloc 00:02:41.561 ************************************ 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.561 00:06:32 
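custom_alloc passed because the per-node split matched the request: node0=512 and node1=1024 2 MiB pages. A hedged sketch of the same per-node comparison done directly against the standard sysfs counters; the expected values are just this run's, and the loop is illustrative rather than hugepages.sh's own code:

  #!/usr/bin/env bash
  # Compare actual per-NUMA-node 2 MiB hugepage counts with expected values.
  declare -A expected=([0]=512 [1]=1024)   # values taken from this test run
  for node in "${!expected[@]}"; do
    f=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    actual=$(<"$f")
    echo "node${node}=${actual} expecting ${expected[$node]}"
    (( actual == expected[$node] )) || exit 1
  done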
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.561 00:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:42.493 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:42.493 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:42.493 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:42.493 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:42.493 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.493 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:42.493 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.493 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.493 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.493 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:42.493 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:42.493 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:42.493 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.493 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:42.493 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.493 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.493 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45820580 kB' 'MemAvailable: 49326800 kB' 'Buffers: 2704 kB' 'Cached: 10249816 kB' 'SwapCached: 0 kB' 'Active: 7260960 kB' 'Inactive: 3508668 kB' 'Active(anon): 6865472 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520312 kB' 'Mapped: 202628 kB' 'Shmem: 6348364 kB' 'KReclaimable: 191808 kB' 'Slab: 574660 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382852 kB' 'KernelStack: 12880 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7977140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB' 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.757 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.757 00:06:34 
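One detail worth noting from the entries above: get_meminfo falls back to /proc/meminfo here because, with an empty node argument, the per-node path collapses to /sys/devices/system/node/node/meminfo and the -e test fails. A sketch of that source selection, including the "Node <id> " prefix strip that the mapfile expansion performs; meminfo_source is an illustrative name and sed stands in for the script's extglob expansion:

  #!/usr/bin/env bash
  # Choose a system-wide or per-NUMA-node meminfo source; per-node files
  # prefix every line with "Node <id> ", so strip it for uniform parsing.
  meminfo_source() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
      mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    sed -E 's/^Node [0-9]+ +//' "$mem_f"
  }
  meminfo_source 0 | head -n 3   # per-node view if node0 exists, else global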
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the AnonHugePages lookup continues past every non-matching /proc/meminfo key, Cached through VmallocUsed]
00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 --
# continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45823920 kB' 'MemAvailable: 49330140 kB' 'Buffers: 2704 kB' 'Cached: 10249820 kB' 'SwapCached: 0 kB' 'Active: 7260488 kB' 'Inactive: 3508668 kB' 'Active(anon): 6865000 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519828 kB' 'Mapped: 202564 kB' 'Shmem: 6348368 kB' 'KReclaimable: 191808 kB' 'Slab: 574660 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382852 kB' 'KernelStack: 12912 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
kB' 'Committed_AS: 7977156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.759 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:42.760 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the HugePages_Surp lookup continues past every non-matching /proc/meminfo key, Active(anon) through HugePages_Total]
00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.761 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45824232 kB' 'MemAvailable: 49330452 kB' 'Buffers: 2704 kB' 'Cached: 10249836 kB' 'SwapCached: 0 kB' 'Active: 7260216 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864728 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519560 kB' 'Mapped: 202436 kB' 'Shmem: 6348384 kB' 'KReclaimable: 191808 kB' 'Slab: 574720 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382912 kB' 'KernelStack: 12928 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7977180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB' 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.762 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.762 00:06:34 
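For readers following the trace: get_meminfo in setup/common.sh snapshots the meminfo lines and then walks them with IFS=': ' and read -r var val _, skipping every key that is not the one requested, echoing the matching value, and returning 0. That skip-until-match loop is what the condensed scans in this log represent. A minimal self-contained sketch of the same pattern (simplified; the real helper also handles per-node sources):

    # Print the value of one /proc/meminfo key, scanning line by line,
    # using the same IFS=': ' / read -r var val _ split as setup/common.sh.
    get=HugePages_Rsvd
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key is skipped
        echo "$val"                        # e.g. "0" on this runner
        break
    done < /proc/meminfo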
[xtrace condensed: setup/common.sh@31 reads each snapshot line and setup/common.sh@32 compares its key against HugePages_Rsvd; MemTotal ... HugePages_Free all miss and hit continue]
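A side note for anyone puzzled by tokens like \H\u\g\e\P\a\g\e\s\_\R\s\v\d in this trace: the right-hand side of == inside [[ ]] is a glob pattern, and bash's xtrace backslash-escapes each character of that word when it prints the expanded command. The escaping is trace formatting, not something the script wrote. A quick demo you can run in any bash shell:

    set -x
    key=HugePages_Rsvd
    [[ $key == HugePages_Rsvd ]] && echo matched
    # xtrace prints: + [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]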
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:42.764 nr_hugepages=1024
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:42.764 resv_hugepages=0
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:42.764 surplus_hugepages=0
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:42.764 anon_hugepages=0
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:42.764 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45823476 kB' 'MemAvailable: 49329696 kB' 'Buffers: 2704 kB' 'Cached: 10249840 kB' 'SwapCached: 0 kB' 'Active: 7259936 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864448 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519276 kB' 'Mapped: 202436 kB' 'Shmem: 6348388 kB' 'KReclaimable: 191808 kB' 'Slab: 574720 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382912 kB' 'KernelStack: 12928 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7977204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
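The hugepages.sh@107 arithmetic above is the script's sanity check that the hugepage pool adds up: the count the test configured must match what the kernel reports once surplus and reserved pages are accounted for (the left-hand 1024 is already expanded in the trace, so its exact source expression is not visible here). A standalone sketch of one plausible reading of that invariant, using awk instead of the script's get_meminfo helper:

    # Check the hugepage pool the way hugepages.sh@107 appears to: configured
    # count versus kernel-reported total, surplus, and reserved pages.
    nr_hugepages=1024   # value the test configured, per the trace
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2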
[xtrace condensed: setup/common.sh@31/@32 scan the snapshot keys (MemTotal ... HugePages_Free) against HugePages_Total; all miss and hit continue]
00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:42.766 00:06:34
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.766 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21750496 kB' 'MemUsed: 11126444 kB' 'SwapCached: 0 kB' 'Active: 4805892 kB' 'Inactive: 3265856 kB' 'Active(anon): 4616424 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7765588 kB' 'Mapped: 73828 kB' 'AnonPages: 309296 kB' 'Shmem: 4310264 kB' 'KernelStack: 7880 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116900 kB' 'Slab: 315440 kB' 'SReclaimable: 116900 kB' 'SUnreclaim: 198540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.767 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
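The get_nodes / per-node get_meminfo sequence above enumerates NUMA nodes with an extglob pattern and then re-reads hugepage counters from the node-local meminfo file, whose lines carry a "Node 0 " prefix that common.sh@29 strips. A self-contained sketch of both steps, assuming 2 MB hugepages (the pool sizes and sysfs paths come from the trace; this is not SPDK's exact helper):

    shopt -s extglob                       # needed for the node+([0-9]) pattern
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"       # 2 on this runner, with 1024 pages on node0

    # Per-node meminfo read: fall back to /proc/meminfo when the node file is
    # absent, and strip the "Node N " prefix that node-local lines carry.
    node=0 get=HugePages_Surp mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; break; }
    done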
[xtrace condensed: setup/common.sh@31/@32 scan node0's meminfo keys (MemFree ... HugePages_Free) against HugePages_Surp; all miss and hit continue]
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:42.768 node0=1024 expecting 1024
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:42.768 00:06:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:44.153 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:44.153 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:44.153 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:44.153 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:44.153 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:44.153 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:44.153 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:44.153 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:44.153 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:44.153 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:44.153 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:44.153 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:44.153 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:44.153 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:44.153 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:44.153 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:44.153 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:44.153 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45806532 kB' 'MemAvailable: 49312752 kB' 'Buffers: 2704 kB' 'Cached: 10249928 kB' 'SwapCached: 0 kB' 'Active: 7261176 kB' 'Inactive: 3508668 kB' 'Active(anon): 6865688 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520452 kB' 'Mapped: 202468 kB' 'Shmem: 6348476 kB' 'KReclaimable: 191808 kB' 'Slab: 574776 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382968 kB' 'KernelStack: 12928 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7977252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196804 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
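Every one of the scans in this log is the same parsing pattern: get_meminfo reads the chosen meminfo file into an array with mapfile, strips any "Node N " prefix, then splits each line on IFS=': ' and continues past every field that is not the requested key. A minimal re-creation of that pattern in bash (get_meminfo_sketch is a hypothetical name, a sketch rather than the SPDK helper itself):

  #!/usr/bin/env bash
  shopt -s extglob                                  # needed for the +([0-9]) pattern below
  get_meminfo_sketch() {                            # hypothetical stand-in for get_meminfo
      local get=$1 node=${2:-}                      # key to find, optional NUMA node
      local mem_f=/proc/meminfo line var val _
      # prefer the per-node meminfo when a node was given and the file exists
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")              # strip the "Node N " prefix, if any
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"    # "HugePages_Surp:   0" -> var, val
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done
      echo 0                                        # key not present in this file
  }
  get_meminfo_sketch HugePages_Surp                 # prints 0 here, matching the trace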
00:02:44.153 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[trace condensed: setup/common.sh@31-32 continue past every /proc/meminfo field from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:02:44.154 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:44.154 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:44.154 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
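The hugepages.sh@96 test above gates the anon accounting on whether transparent hugepages are globally disabled before AnonHugePages is read. A hedged sketch of that guard, reusing the get_meminfo_sketch helper from above (the real script's surrounding logic may differ):

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then                    # THP not globally disabled
      anon=$(get_meminfo_sketch AnonHugePages)          # kB of THP-backed anonymous memory
  fi
  echo "anon=${anon} kB"                                # 0 kB in this run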
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:44.155 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45812172 kB' 'MemAvailable: 49318392 kB' 'Buffers: 2704 kB' 'Cached: 10249928 kB' 'SwapCached: 0 kB' 'Active: 7260744 kB' 'Inactive: 3508668 kB' 'Active(anon): 6865256 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520064 kB' 'Mapped: 202448 kB' 'Shmem: 6348476 kB' 'KReclaimable: 191808 kB' 'Slab: 574720 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382912 kB' 'KernelStack: 12928 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7977268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
[trace condensed: setup/common.sh@31-32 continue past every /proc/meminfo field from MemTotal through HugePages_Rsvd; none matches HugePages_Surp]
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
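The surplus and reserved counters the script extracts here are also exported per NUMA node under sysfs, so they can be read without parsing meminfo at all. A small sketch, assuming the 2048 kB page size reported above:

  node=0
  base=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
  total=$(<"$base/nr_hugepages")        # pages currently allocated on this node
  free=$(<"$base/free_hugepages")       # allocated but unused pages
  surp=$(<"$base/surplus_hugepages")    # overcommit pages beyond the static pool
  echo "node$node: total=$total free=$free surplus=$surp"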
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.156 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:44.157 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:44.157 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45812412 kB' 'MemAvailable: 49318632 kB' 'Buffers: 2704 kB' 'Cached: 10249948 kB' 'SwapCached: 0 kB' 'Active: 7260428 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864940 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519628 kB' 'Mapped: 202448 kB' 'Shmem: 6348496 kB' 'KReclaimable: 191808 kB' 'Slab: 574784 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382976 kB' 'KernelStack: 12928 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7977292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
[trace condensed: setup/common.sh@31-32 again step through /proc/meminfo field by field looking for HugePages_Rsvd; the log breaks off mid-scan after SecPageTables]
# continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:44.158 nr_hugepages=1024 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.158 resv_hugepages=0 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.158 surplus_hugepages=0 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.158 anon_hugepages=0 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
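For reference, a minimal bash sketch of the get_meminfo pattern traced above, reconstructed from the xtrace. Only what the trace shows is taken as given; everything else is an assumption, not the verbatim SPDK source.

#!/usr/bin/env bash
shopt -s extglob   # for the +([0-9]) pattern, as in the traced script

# Print the value of one meminfo key, system-wide or for a single NUMA node.
get_meminfo() {
	local get=$1 node=$2
	local var val _ mem_f mem line
	mem_f=/proc/meminfo
	# A per-node lookup switches to that node's own meminfo file.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem <"$mem_f"
	# Per-node files prefix every line with "Node N "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		# Each non-matching key is one compare/continue pair in the trace.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Rsvd    # -> 0 on this box
get_meminfo HugePages_Surp 0  # -> 0 on node0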
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:44.158 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:44.159 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45812720 kB' 'MemAvailable: 49318940 kB' 'Buffers: 2704 kB' 'Cached: 10249968 kB' 'SwapCached: 0 kB' 'Active: 7260448 kB' 'Inactive: 3508668 kB' 'Active(anon): 6864960 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519628 kB' 'Mapped: 202448 kB' 'Shmem: 6348516 kB' 'KReclaimable: 191808 kB' 'Slab: 574784 kB' 'SReclaimable: 191808 kB' 'SUnreclaim: 382976 kB' 'KernelStack: 12928 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7977312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2453084 kB' 'DirectMap2M: 20535296 kB' 'DirectMap1G: 46137344 kB'
00:02:44.159 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:44.159 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:44.159 [identical compare/continue xtrace repeated for every remaining meminfo key until HugePages_Total matches]
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
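Continuing the sketch, the get_nodes step traced above can be reconstructed roughly as follows. The stored values (node0=1024, node1=0) are exactly what the trace shows; reading them from each node's 2048kB pool is an assumption about where they come from.

shopt -s extglob
declare -A nodes_sys nodes_test

# Record every NUMA node's current hugepage count, keyed by node number.
get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}   # 2 on this box
	((no_nodes > 0))            # the test requires at least one node
}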
+([0-9]) }") 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21741400 kB' 'MemUsed: 11135540 kB' 'SwapCached: 0 kB' 'Active: 4806428 kB' 'Inactive: 3265856 kB' 'Active(anon): 4616960 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7765596 kB' 'Mapped: 73840 kB' 'AnonPages: 309796 kB' 'Shmem: 4310272 kB' 'KernelStack: 7912 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116900 kB' 'Slab: 315652 kB' 'SReclaimable: 116900 kB' 'SUnreclaim: 198752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.160 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.160 00:06:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.161 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:44.162 node0=1024 expecting 1024 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:44.162 00:02:44.162 real 0m2.778s 00:02:44.162 user 0m1.166s 00:02:44.162 sys 0m1.525s 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:44.162 00:06:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:44.162 ************************************ 00:02:44.162 END TEST no_shrink_alloc 
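The per-node verification that just printed "node0=1024 expecting 1024" can be sketched as below. This is a simplification: the real script also fills the sorted_t/sorted_s marker arrays seen in the trace before comparing, and nodes_test holds the counts the test expects.

resv=$(get_meminfo HugePages_Rsvd)   # system-wide reserved pages, 0 here
for node in "${!nodes_test[@]}"; do
	# Expected count grows by reserved plus the node's surplus pages.
	((nodes_test[node] += resv))
	((nodes_test[node] += $(get_meminfo HugePages_Surp "$node")))
	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
	[[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]
done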
00:02:44.162 00:06:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:44.162 00:06:35 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:02:44.162 00:06:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:44.162 00:06:35 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # [repeated xtrace collapsed: for each of the two nodes in "${!nodes_sys[@]}" and each "/sys/devices/system/node/node$node/hugepages/hugepages-"* size, `echo 0` resets the pool]
00:02:44.162 00:06:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:44.162 00:06:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:44.162
00:02:44.162 real	0m11.253s
00:02:44.162 user	0m4.391s
00:02:44.162 sys	0m5.690s
00:02:44.162 00:06:35 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:44.162 00:06:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:44.162 ************************************
00:02:44.162 END TEST hugepages
00:02:44.162 ************************************
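
The clear_hp teardown walks every NUMA node and zeroes each hugepage pool; the xtrace only shows the bare `echo 0`, so the redirect target below (the standard nr_hugepages attribute) is an assumption about where those writes land:

    # Reset all hugepage pools on every node (run as root).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # picked up by later scripts/setup.sh invocations
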
00:02:44.420 00:06:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:02:44.420 00:06:35 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:44.420 00:06:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:44.420 00:06:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:44.420 00:06:35 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:44.420 ************************************
00:02:44.420 START TEST driver
00:02:44.420 ************************************
00:02:44.420 00:06:35 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:44.420 * Looking for test storage...
00:02:44.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:44.420 00:06:35 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:02:44.420 00:06:35 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:44.420 00:06:35 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:46.959 00:06:38 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:02:46.959 00:06:38 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:46.959 00:06:38 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:46.959 00:06:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:46.959 ************************************
00:02:46.959 START TEST guess_driver
00:02:46.959 ************************************
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod chain under /lib/modules/6.7.0-68.fc38.x86_64 (irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core, vfio-pci) == *\.\k\o* ]]
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
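
Condensed, the pick above says: vfio-pci is usable when IOMMU groups exist and modprobe can resolve vfio_pci's dependency chain into .ko files. A rough standalone rendition (guess_vfio is an illustrative name, not the script's):

    guess_vfio() {
        local groups=(/sys/kernel/iommu_groups/*)
        [[ -e ${groups[0]} ]] || groups=()   # guard against an unmatched glob
        if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci
        else
            echo 'No valid driver found'
            return 1
        fi
    }

    guess_vfio   # on this node: 141 groups and a resolvable chain, so vfio-pci
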
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:02:46.959 Looking for driver=vfio-pci
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:02:46.959 00:06:38 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:47.900 00:06:39 setup.sh.driver.guess_driver -- setup/driver.sh@57-61 -- # [repeated xtrace collapsed: for every device line printed by config between 00:06:39 and 00:06:40, `[[ -> == \-\> ]]` and `[[ vfio-pci == vfio-pci ]]` both hold and the next line is read; no line sets fail]
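
The loop collapsed here consumes the config output line by line; the `read -r _ _ _ _ marker setup_driver` pattern implies records whose fifth field is the `->` rebind arrow followed by the bound driver. The exact line format is inferred from that read pattern rather than shown in the log, so treat this as a sketch:

    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue        # only rebind lines carry the arrow
        [[ $setup_driver == vfio-pci ]] || fail=1
    done < <(sudo ./scripts/setup.sh config)     # run from the spdk checkout
    (( fail == 0 )) && echo 'every device was bound to vfio-pci'
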
00:02:49.100 00:06:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:02:49.100 00:06:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:02:49.100 00:06:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:49.100 00:06:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:51.637
00:02:51.637 real	0m4.568s
00:02:51.637 user	0m1.008s
00:02:51.637 sys	0m1.670s
00:02:51.637 00:06:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:51.637 00:06:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:02:51.637 ************************************
00:02:51.637 END TEST guess_driver
00:02:51.637 ************************************
00:02:51.637 00:06:42 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:02:51.637
00:02:51.637 real	0m7.070s
00:02:51.637 user	0m1.588s
00:02:51.637 sys	0m2.617s
00:02:51.637 00:06:42 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:51.637 00:06:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:51.637 ************************************
00:02:51.637 END TEST driver
00:02:51.637 ************************************
00:02:51.637 00:06:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:02:51.637 00:06:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:02:51.637 00:06:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:51.637 00:06:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:51.637 00:06:42 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:51.637 ************************************
00:02:51.637 START TEST devices
00:02:51.637 ************************************
00:02:51.637 00:06:42 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:02:51.637 * Looking for test storage...
00:02:51.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:51.637 00:06:42 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:02:51.637 00:06:42 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:02:51.637 00:06:42 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:51.637 00:06:42 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
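
get_zoned_devs is a guard against zoned namespaces, which these filesystem tests cannot format: a block device is zoned when its queue/zoned attribute reads anything but none. The same check in isolation:

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(<"$nvme/queue/zoned") == none ]] && continue
        zoned_devs[${nvme##*/}]=1
    done
    # Here nvme0n1 reads "none", so the map stays empty and the disk is eligible.
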
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:02:53.015 00:06:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:02:53.015 00:06:44 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:02:53.015 No valid GPT data, bailing
00:02:53.015 00:06:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:53.015 00:06:44 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:02:53.015 00:06:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:02:53.015 00:06:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:02:53.015 00:06:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:02:53.015 00:06:44 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
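
The 1000204886016 figure reported by sec_size_to_bytes is, assuming the usual sysfs convention, the disk's 512-byte sector count multiplied out; the gate then requires at least 3 GiB so the 1 GiB test partitions fit:

    dev=nvme0n1
    bytes=$(( $(< "/sys/block/$dev/size") * 512 ))
    min_disk_size=$(( 3 * 1024 * 1024 * 1024 ))   # 3221225472
    (( bytes >= min_disk_size )) && echo "$dev qualifies with $bytes bytes"
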
00:02:53.015 00:06:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:53.015 00:06:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:02:53.015 ************************************
00:02:53.015 START TEST nvme_mount
00:02:53.015 ************************************
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/common.sh@39-47 -- # [locals collapsed: disk=nvme0n1 part_no=1 size=1073741824; loop builds parts=(nvme0n1p1)]
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:02:53.015 00:06:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:02:53.952 Creating new GPT entries in memory.
00:02:53.952 GPT data structures destroyed! You may now partition the disk using fdisk or
00:02:53.952 other utilities.
00:02:53.952 00:06:45 setup.sh.devices.nvme_mount -- setup/common.sh@57-59 -- # [loop collapsed: part=1, part_start=2048, part_end=2099199]
00:02:53.952 00:06:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:02:55.331 Creating new GPT entries in memory.
00:02:55.331 The operation has completed successfully.
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2148553
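
partition_drive reduces to: convert the 1 GiB target to sectors (1073741824 / 512 = 2097152), wipe the label, and create each partition under flock so concurrent setup jobs cannot race on the disk. A compressed rendition; partprobe stands in for sync_dev_uevents.sh, which waits for the partition uevents instead of forcing a table re-read:

    disk=/dev/nvme0n1
    size_sectors=$(( 1073741824 / 512 ))   # 2097152
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + size_sectors - 1 ))   # end sector 2099199
    partprobe "$disk"
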
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48-60 -- # [locals collapsed: dev=0000:88:00.0 mounts=nvme0n1:nvme0n1p1 found=0; PCI_ALLOWED=0000:88:00.0 setup output config]
00:02:55.331 00:06:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:56.263 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.263 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:02:56.263 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:02:56.263 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60-62 -- # [repeated xtrace collapsed: the sixteen 0000:00:04.x / 0000:80:04.x status lines each fail the 0000:88:00.0 match and are skipped]
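
verify narrows setup.sh to the one controller under test via PCI_ALLOWED and asserts that its status line names the expected consumer instead of rebinding the device. Stripped to its core, with the status text taken from the log:

    expected='nvme0n1:nvme0n1p1'
    found=0
    while read -r pci _ _ status; do
        [[ $pci == 0000:88:00.0 ]] || continue
        [[ $status == *"Active devices: "*"$expected"* ]] && found=1
    done < <(sudo PCI_ALLOWED=0000:88:00.0 ./scripts/setup.sh config)
    (( found == 1 )) && echo 'controller held by the mount, not rebound'
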
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:02:56.523 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:02:56.523 00:06:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:02:56.781 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:02:56.781 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:02:56.781 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:02:56.781 /dev/nvme0n1: calling ioctl to re-read partition table: Success
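
The wipefs output is worth decoding: 53 ef at 0x438 is the little-endian ext4 superblock magic (0xEF53, at byte 1080 of the device), the two 45 46 49 20 50 41 52 54 runs are the ASCII "EFI PART" signatures of the primary (LBA 1, offset 0x200) and backup (end-of-disk) GPT headers, and 55 aa at 0x1fe is the protective-MBR boot signature. The whole teardown is therefore just:

    umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    wipefs --all /dev/nvme0n1p1   # drops the ext4 magic
    wipefs --all /dev/nvme0n1     # drops both GPT headers plus the PMBR signature
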
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48-60 -- # [locals collapsed: dev=0000:88:00.0 mounts=nvme0n1:nvme0n1 found=0; PCI_ALLOWED=0000:88:00.0 setup output config]
00:02:56.781 00:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
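
Note that this second format call caps the filesystem rather than the device: mke2fs accepts an optional size argument, so the roughly 1 TB disk receives only a 1 GiB ext4, presumably keeping format time constant regardless of disk size:

    mkfs.ext4 -qF /dev/nvme0n1 1024M   # -q quiet, -F force (whole, unpartitioned disk)
    mount /dev/nvme0n1 "$mount_point"  # $mount_point: the nvme_mount directory above
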
00:02:57.712 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.712 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:02:57.712 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:02:57.712 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60-62 -- # [repeated xtrace collapsed: sixteen 0000:00:04.x / 0000:80:04.x status lines skipped]
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' ''
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48-60 -- # [locals collapsed: dev=0000:88:00.0 mounts=data@nvme0n1 mount_point= test_file= found=0; PCI_ALLOWED=0000:88:00.0 setup output config]
00:02:57.971 00:06:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:58.910 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.910 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:02:58.910 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:02:58.910 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60-62 -- # [repeated xtrace collapsed: sixteen 0000:00:04.x / 0000:80:04.x status lines skipped]
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:02:59.169 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:02:59.169
00:02:59.169 real	0m6.166s
00:02:59.169 user	0m1.442s
00:02:59.169 sys	0m2.263s
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:59.169 00:06:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:02:59.169 ************************************
00:02:59.169 END TEST nvme_mount
00:02:59.169 ************************************
00:02:59.169 00:06:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:02:59.169 00:06:50 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:02:59.169 00:06:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:59.169 00:06:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:59.169 00:06:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:02:59.169 ************************************
00:02:59.169 START TEST dm_mount
00:02:59.169 ************************************
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- setup/common.sh@39-47 -- # [locals collapsed: disk=nvme0n1 part_no=2 size=1073741824; loop builds parts=(nvme0n1p1 nvme0n1p2)]
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:02:59.169 00:06:50 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:00.547 Creating new GPT entries in memory.
00:03:00.547 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:00.547 other utilities.
00:03:00.547 00:06:51 setup.sh.devices.dm_mount -- setup/common.sh@57-59 -- # [loop collapsed: part=1, part_start=2048, part_end=2099199]
00:03:00.547 00:06:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:01.484 Creating new GPT entries in memory.
00:03:01.484 The operation has completed successfully.
00:03:01.484 00:06:52 setup.sh.devices.dm_mount -- setup/common.sh@57-59 -- # [loop collapsed: part=2, part_start=2099200, part_end=4196351]
00:03:01.484 00:06:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:02.420 The operation has completed successfully.
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2150938
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
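
dmsetup create reads its table from stdin, which xtrace does not echo, so the table below is a reconstruction rather than a quote: given the two 2097152-sector partitions just created, a linear concatenation is the natural fit:

    dmsetup create nvme_dm_test << 'TABLE'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    TABLE

    readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0, as above
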
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:02.420 00:06:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:02.421 00:06:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:02.421 00:06:53 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:02.421 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:02.421 00:06:53 setup.sh.devices.dm_mount -- setup/devices.sh@48-60 -- # [locals collapsed: dev=0000:88:00.0 mounts=nvme0n1:nvme_dm_test found=0; PCI_ALLOWED=0000:88:00.0 setup output config]
00:03:02.421 00:06:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:03.358 00:06:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:03.358 00:06:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:03.358 00:06:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:03.358 00:06:54 setup.sh.devices.dm_mount -- setup/devices.sh@60-62 -- # [repeated xtrace collapsed: sixteen 0000:00:04.x / 0000:80:04.x status lines skipped]
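
The holder@nvme0n1p1:dm-0 tokens in that status line come straight from sysfs: any device stacked on a partition shows up under the partition's holders/ directory, which is also what the earlier `[[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]` checks relied on. Enumerating them:

    for part in /sys/class/block/nvme0n1p{1,2}; do
        for holder in "$part"/holders/*; do
            [[ -e $holder ]] && echo "holder@${part##*/}:${holder##*/}"
        done
    done
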
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/devices.sh@48-60 -- # [locals collapsed: dev=0000:88:00.0 mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 mount_point= test_file= found=0; PCI_ALLOWED=0000:88:00.0 setup output config]
00:03:03.617 00:06:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:04.553 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:04.553 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:03:04.553 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:04.553 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@60-62 -- # [repeated xtrace collapsed: sixteen 0000:00:04.x / 0000:80:04.x status lines skipped]
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:03:04.813 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:03:04.813
00:03:04.813 real	0m5.580s
00:03:04.813 user	0m0.931s
00:03:04.813 sys	0m1.466s
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:04.813 00:06:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:03:04.813 ************************************
00:03:04.813 END TEST dm_mount
00:03:04.813 ************************************
00:03:04.813 00:06:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return
0 00:03:04.813 00:06:56 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:04.813 00:06:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:04.813 00:06:56 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.813 00:06:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:04.813 00:06:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:04.813 00:06:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:04.813 00:06:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:05.072 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:05.072 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:05.072 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:05.072 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:05.072 00:06:56 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:05.072 00:06:56 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:05.072 00:06:56 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:05.072 00:06:56 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:05.072 00:06:56 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:05.072 00:06:56 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:05.072 00:06:56 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:05.072 00:03:05.072 real 0m13.694s 00:03:05.072 user 0m3.012s 00:03:05.072 sys 0m4.793s 00:03:05.072 00:06:56 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.072 00:06:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:05.072 ************************************ 00:03:05.072 END TEST devices 00:03:05.072 ************************************ 00:03:05.072 00:06:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:05.072 00:03:05.072 real 0m42.286s 00:03:05.072 user 0m12.201s 00:03:05.072 sys 0m18.165s 00:03:05.072 00:06:56 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.072 00:06:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:05.072 ************************************ 00:03:05.072 END TEST setup.sh 00:03:05.072 ************************************ 00:03:05.072 00:06:56 -- common/autotest_common.sh@1142 -- # return 0 00:03:05.072 00:06:56 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:06.447 Hugepages 00:03:06.447 node hugesize free / total 00:03:06.447 node0 1048576kB 0 / 0 00:03:06.447 node0 2048kB 2048 / 2048 00:03:06.447 node1 1048576kB 0 / 0 00:03:06.447 node1 2048kB 0 / 0 00:03:06.447 00:03:06.447 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.447 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:06.447 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:06.447 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:06.447 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:06.447 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:06.447 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:06.447 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:06.447 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:06.447 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:06.447 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:06.447 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:06.447 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:06.447 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:06.447 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:06.447 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:06.447 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:06.447 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:06.447 00:06:57 -- spdk/autotest.sh@130 -- # uname -s 00:03:06.447 00:06:57 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:06.447 00:06:57 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:06.447 00:06:57 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.384 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:07.384 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:07.384 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:07.384 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:07.644 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:07.644 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:07.644 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:07.644 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:07.644 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:07.644 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:07.644 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:07.644 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:07.644 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:07.644 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:07.644 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:07.644 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:08.612 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.612 00:07:00 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:09.575 00:07:01 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:09.575 00:07:01 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:09.575 00:07:01 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:09.575 00:07:01 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:09.575 00:07:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:09.575 00:07:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:09.575 00:07:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:09.575 00:07:01 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:09.575 00:07:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:09.835 00:07:01 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:09.835 00:07:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:09.835 00:07:01 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.773 Waiting for block devices as requested 00:03:10.773 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:11.032 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:11.032 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:11.032 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:11.290 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:11.290 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:11.290 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:11.290 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:11.290 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:11.549 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:11.549 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:11.549 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:11.807 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:11.807 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:11.807 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:11.807 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:12.067 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:12.067 00:07:03 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:12.067 00:07:03 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:12.067 00:07:03 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:12.067 00:07:03 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:12.067 00:07:03 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:12.067 00:07:03 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:12.067 00:07:03 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:12.067 00:07:03 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:12.067 00:07:03 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:12.067 00:07:03 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:12.067 00:07:03 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:12.067 00:07:03 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:12.067 00:07:03 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:12.067 00:07:03 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:12.067 00:07:03 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:12.067 00:07:03 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:12.067 00:07:03 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:12.067 00:07:03 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:12.067 00:07:03 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:12.067 00:07:03 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:12.067 00:07:03 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:12.067 00:07:03 -- common/autotest_common.sh@1557 -- # continue 00:03:12.067 00:07:03 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:12.067 00:07:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:12.067 00:07:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.067 00:07:03 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:12.067 00:07:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.067 00:07:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.067 00:07:03 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.446 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:13.446 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:13.446 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:13.446 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:13.446 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:13.446 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:13.446 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:13.446 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:13.446 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:13.446 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:13.446 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:13.446 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:13.446 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:13.446 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:13.446 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:13.446 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:14.386 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.386 00:07:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:14.386 00:07:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:14.386 00:07:05 -- common/autotest_common.sh@10 -- # set +x 00:03:14.386 00:07:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:14.386 00:07:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:14.386 00:07:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:14.386 00:07:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:14.386 00:07:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:14.386 00:07:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:14.386 00:07:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:14.386 00:07:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:14.386 00:07:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:14.386 00:07:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:14.386 00:07:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:14.645 00:07:06 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:14.645 00:07:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:14.645 00:07:06 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:14.645 00:07:06 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:14.645 00:07:06 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:14.645 00:07:06 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:14.645 00:07:06 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:14.645 00:07:06 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:14.645 00:07:06 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:14.645 00:07:06 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2156107 00:03:14.645 00:07:06 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.645 00:07:06 -- common/autotest_common.sh@1598 -- # waitforlisten 2156107 00:03:14.645 00:07:06 -- common/autotest_common.sh@829 -- # '[' -z 2156107 ']' 00:03:14.645 00:07:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:14.645 00:07:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:14.645 00:07:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:14.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:14.645 00:07:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:14.645 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:03:14.645 [2024-07-16 00:07:06.068374] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:03:14.645 [2024-07-16 00:07:06.068476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156107 ] 00:03:14.645 EAL: No free 2048 kB hugepages reported on node 1 00:03:14.645 [2024-07-16 00:07:06.127984] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:14.645 [2024-07-16 00:07:06.237448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:14.904 00:07:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:14.904 00:07:06 -- common/autotest_common.sh@862 -- # return 0 00:03:14.904 00:07:06 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:14.904 00:07:06 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:14.904 00:07:06 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:18.195 nvme0n1 00:03:18.195 00:07:09 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:18.455 [2024-07-16 00:07:09.820136] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:18.455 [2024-07-16 00:07:09.820200] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:18.455 request: 00:03:18.455 { 00:03:18.455 "nvme_ctrlr_name": "nvme0", 00:03:18.455 "password": "test", 00:03:18.455 "method": "bdev_nvme_opal_revert", 00:03:18.455 "req_id": 1 00:03:18.455 } 00:03:18.455 Got JSON-RPC error response 00:03:18.455 response: 00:03:18.455 { 00:03:18.455 "code": -32603, 00:03:18.455 "message": "Internal error" 00:03:18.455 } 00:03:18.455 00:07:09 -- common/autotest_common.sh@1604 -- # true 00:03:18.455 00:07:09 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:18.455 00:07:09 -- common/autotest_common.sh@1608 -- # killprocess 2156107 00:03:18.455 00:07:09 -- common/autotest_common.sh@948 -- # '[' -z 2156107 ']' 00:03:18.455 00:07:09 -- common/autotest_common.sh@952 -- # kill -0 2156107 00:03:18.455 00:07:09 -- common/autotest_common.sh@953 -- # uname 00:03:18.455 00:07:09 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:18.456 00:07:09 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2156107 00:03:18.456 00:07:09 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:18.456 00:07:09 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:18.456 00:07:09 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2156107' 00:03:18.456 killing process with pid 2156107 00:03:18.456 00:07:09 -- common/autotest_common.sh@967 -- # kill 2156107 00:03:18.456 00:07:09 -- common/autotest_common.sh@972 -- # wait 2156107 00:03:20.360 00:07:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:20.360 00:07:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:20.360 00:07:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:20.360 00:07:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:20.360 00:07:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:20.360 00:07:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:20.360 00:07:11 -- common/autotest_common.sh@10 -- # set +x 00:03:20.360 00:07:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:20.360 00:07:11 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:20.360 00:07:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.360 00:07:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.360 00:07:11 -- common/autotest_common.sh@10 -- # set +x 00:03:20.360 ************************************ 00:03:20.360 START TEST env 00:03:20.360 ************************************ 00:03:20.360 00:07:11 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:20.360 * Looking for test storage... 00:03:20.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:20.360 00:07:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.360 00:07:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.360 00:07:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.360 00:07:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.360 ************************************ 00:03:20.360 START TEST env_memory 00:03:20.360 ************************************ 00:03:20.360 00:07:11 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.360 00:03:20.360 00:03:20.360 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.361 http://cunit.sourceforge.net/ 00:03:20.361 00:03:20.361 00:03:20.361 Suite: memory 00:03:20.361 Test: alloc and free memory map ...[2024-07-16 00:07:11.806490] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:20.361 passed 00:03:20.361 Test: mem map translation ...[2024-07-16 00:07:11.827046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:20.361 [2024-07-16 00:07:11.827068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:20.361 [2024-07-16 00:07:11.827111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:20.361 [2024-07-16 00:07:11.827123] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:20.361 passed 00:03:20.361 Test: mem map registration ...[2024-07-16 00:07:11.869265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:20.361 [2024-07-16 00:07:11.869284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:20.361 passed 00:03:20.361 Test: mem map adjacent registrations ...passed 00:03:20.361 00:03:20.361 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.361 suites 1 1 n/a 0 0 00:03:20.361 tests 4 4 4 0 0 00:03:20.361 asserts 152 152 152 0 n/a 00:03:20.361 00:03:20.361 Elapsed time = 0.144 seconds 00:03:20.361 00:03:20.361 real 0m0.152s 00:03:20.361 user 0m0.144s 00:03:20.361 sys 0m0.007s 00:03:20.361 00:07:11 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.361 00:07:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:20.361 ************************************ 00:03:20.361 END TEST env_memory 00:03:20.361 ************************************ 00:03:20.361 00:07:11 env -- common/autotest_common.sh@1142 -- # return 0 00:03:20.361 00:07:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:20.361 00:07:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.361 00:07:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.361 00:07:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.361 ************************************ 00:03:20.361 START TEST env_vtophys 00:03:20.361 ************************************ 00:03:20.361 00:07:11 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:20.621 EAL: lib.eal log level changed from notice to debug 00:03:20.621 EAL: Detected lcore 0 as core 0 on socket 0 00:03:20.621 EAL: Detected lcore 1 as core 1 on socket 0 00:03:20.621 EAL: Detected lcore 2 as core 2 on socket 0 00:03:20.621 EAL: Detected lcore 3 as core 3 on socket 0 00:03:20.621 EAL: Detected lcore 4 as core 4 on socket 0 00:03:20.621 EAL: Detected lcore 5 as core 5 on socket 0 00:03:20.621 EAL: Detected lcore 6 as core 8 on socket 0 00:03:20.621 EAL: Detected lcore 7 as core 9 on socket 0 00:03:20.621 EAL: Detected lcore 8 as core 10 on socket 0 00:03:20.621 EAL: Detected lcore 9 as core 11 on socket 0 00:03:20.621 EAL: Detected lcore 10 as core 12 on socket 0 00:03:20.621 EAL: Detected lcore 11 as core 13 on socket 0 00:03:20.621 EAL: Detected lcore 12 as core 0 on socket 1 00:03:20.621 EAL: Detected lcore 13 as core 1 on socket 1 00:03:20.621 EAL: Detected lcore 14 as core 2 on socket 1 00:03:20.621 EAL: Detected lcore 15 as core 3 on socket 1 00:03:20.621 EAL: Detected lcore 16 as core 4 on socket 1 00:03:20.621 EAL: Detected lcore 17 as core 5 on socket 1 00:03:20.621 EAL: Detected lcore 18 as core 8 on socket 1 00:03:20.621 EAL: Detected lcore 19 as core 9 on socket 1 00:03:20.621 EAL: Detected lcore 20 as core 10 on socket 1 00:03:20.621 EAL: Detected lcore 21 as core 11 on socket 1 00:03:20.621 EAL: Detected lcore 22 as core 12 on socket 1 00:03:20.621 EAL: Detected lcore 23 as core 13 on socket 1 00:03:20.621 EAL: Detected lcore 24 as core 0 on socket 0 00:03:20.621 EAL: Detected lcore 25 as core 1 on socket 0 00:03:20.621 EAL: Detected lcore 26 as core 2 on socket 0 00:03:20.621 EAL: Detected lcore 27 as core 3 on socket 0 00:03:20.621 EAL: Detected lcore 28 as core 4 on socket 0 00:03:20.621 EAL: Detected lcore 29 as core 5 on socket 0 00:03:20.621 EAL: Detected lcore 30 as core 8 on socket 0 00:03:20.621 EAL: Detected lcore 31 as core 9 on socket 0 00:03:20.621 EAL: Detected lcore 32 as core 10 on socket 0 00:03:20.621 EAL: Detected lcore 33 as core 11 on socket 0 00:03:20.621 EAL: Detected lcore 34 as core 12 on socket 0 00:03:20.621 EAL: Detected lcore 35 as core 13 on socket 0 00:03:20.621 EAL: Detected lcore 36 as core 0 on socket 1 00:03:20.621 EAL: Detected lcore 37 as core 1 on socket 1 00:03:20.621 EAL: Detected lcore 38 as core 2 on socket 1 00:03:20.621 EAL: Detected lcore 39 as core 3 on socket 1 00:03:20.621 EAL: Detected lcore 40 as core 4 on socket 1 00:03:20.621 EAL: Detected lcore 41 as core 5 on socket 1 00:03:20.621 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:20.621 EAL: Detected lcore 43 as core 9 on socket 1 00:03:20.621 EAL: Detected lcore 44 as core 10 on socket 1 00:03:20.621 EAL: Detected lcore 45 as core 11 on socket 1 00:03:20.621 EAL: Detected lcore 46 as core 12 on socket 1 00:03:20.621 EAL: Detected lcore 47 as core 13 on socket 1 00:03:20.621 EAL: Maximum logical cores by configuration: 128 00:03:20.621 EAL: Detected CPU lcores: 48 00:03:20.621 EAL: Detected NUMA nodes: 2 00:03:20.621 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:20.621 EAL: Detected shared linkage of DPDK 00:03:20.621 EAL: No shared files mode enabled, IPC will be disabled 00:03:20.621 EAL: Bus pci wants IOVA as 'DC' 00:03:20.621 EAL: Buses did not request a specific IOVA mode. 00:03:20.621 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:20.621 EAL: Selected IOVA mode 'VA' 00:03:20.621 EAL: No free 2048 kB hugepages reported on node 1 00:03:20.621 EAL: Probing VFIO support... 00:03:20.621 EAL: IOMMU type 1 (Type 1) is supported 00:03:20.621 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:20.621 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:20.621 EAL: VFIO support initialized 00:03:20.621 EAL: Ask a virtual area of 0x2e000 bytes 00:03:20.621 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:20.621 EAL: Setting up physically contiguous memory... 00:03:20.621 EAL: Setting maximum number of open files to 524288 00:03:20.621 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:20.621 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:20.621 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:20.621 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.621 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:20.621 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.621 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.621 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:20.621 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:20.621 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.621 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:20.621 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.621 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.621 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:20.621 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:20.621 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.621 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:20.621 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.621 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.621 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:20.621 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:20.621 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.621 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:20.621 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.621 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.621 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:20.621 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:20.621 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:20.621 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.621 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:20.622 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:20.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.622 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:20.622 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:20.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.622 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:20.622 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.622 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:20.622 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:20.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.622 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:20.622 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.622 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:20.622 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:20.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.622 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:20.622 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.622 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:20.622 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:20.622 EAL: Hugepages will be freed exactly as allocated. 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: TSC frequency is ~2700000 KHz 00:03:20.622 EAL: Main lcore 0 is ready (tid=7f8c95de4a00;cpuset=[0]) 00:03:20.622 EAL: Trying to obtain current memory policy. 00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.622 EAL: Restoring previous memory policy: 0 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was expanded by 2MB 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:20.622 EAL: Mem event callback 'spdk:(nil)' registered 00:03:20.622 00:03:20.622 00:03:20.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.622 http://cunit.sourceforge.net/ 00:03:20.622 00:03:20.622 00:03:20.622 Suite: components_suite 00:03:20.622 Test: vtophys_malloc_test ...passed 00:03:20.622 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.622 EAL: Restoring previous memory policy: 4 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was expanded by 4MB 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was shrunk by 4MB 00:03:20.622 EAL: Trying to obtain current memory policy. 
00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.622 EAL: Restoring previous memory policy: 4 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was expanded by 6MB 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was shrunk by 6MB 00:03:20.622 EAL: Trying to obtain current memory policy. 00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.622 EAL: Restoring previous memory policy: 4 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was expanded by 10MB 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was shrunk by 10MB 00:03:20.622 EAL: Trying to obtain current memory policy. 00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.622 EAL: Restoring previous memory policy: 4 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was expanded by 18MB 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was shrunk by 18MB 00:03:20.622 EAL: Trying to obtain current memory policy. 00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.622 EAL: Restoring previous memory policy: 4 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was expanded by 34MB 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was shrunk by 34MB 00:03:20.622 EAL: Trying to obtain current memory policy. 00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.622 EAL: Restoring previous memory policy: 4 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was expanded by 66MB 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was shrunk by 66MB 00:03:20.622 EAL: Trying to obtain current memory policy. 
00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.622 EAL: Restoring previous memory policy: 4 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was expanded by 130MB 00:03:20.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.622 EAL: request: mp_malloc_sync 00:03:20.622 EAL: No shared files mode enabled, IPC is disabled 00:03:20.622 EAL: Heap on socket 0 was shrunk by 130MB 00:03:20.622 EAL: Trying to obtain current memory policy. 00:03:20.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.882 EAL: Restoring previous memory policy: 4 00:03:20.882 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.882 EAL: request: mp_malloc_sync 00:03:20.882 EAL: No shared files mode enabled, IPC is disabled 00:03:20.882 EAL: Heap on socket 0 was expanded by 258MB 00:03:20.882 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.882 EAL: request: mp_malloc_sync 00:03:20.882 EAL: No shared files mode enabled, IPC is disabled 00:03:20.882 EAL: Heap on socket 0 was shrunk by 258MB 00:03:20.882 EAL: Trying to obtain current memory policy. 00:03:20.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.142 EAL: Restoring previous memory policy: 4 00:03:21.142 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.142 EAL: request: mp_malloc_sync 00:03:21.142 EAL: No shared files mode enabled, IPC is disabled 00:03:21.142 EAL: Heap on socket 0 was expanded by 514MB 00:03:21.142 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.142 EAL: request: mp_malloc_sync 00:03:21.142 EAL: No shared files mode enabled, IPC is disabled 00:03:21.142 EAL: Heap on socket 0 was shrunk by 514MB 00:03:21.142 EAL: Trying to obtain current memory policy. 
00:03:21.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.401 EAL: Restoring previous memory policy: 4 00:03:21.401 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.401 EAL: request: mp_malloc_sync 00:03:21.401 EAL: No shared files mode enabled, IPC is disabled 00:03:21.401 EAL: Heap on socket 0 was expanded by 1026MB 00:03:21.661 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.920 EAL: request: mp_malloc_sync 00:03:21.920 EAL: No shared files mode enabled, IPC is disabled 00:03:21.920 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:21.920 passed 00:03:21.920 00:03:21.921 Run Summary: Type Total Ran Passed Failed Inactive 00:03:21.921 suites 1 1 n/a 0 0 00:03:21.921 tests 2 2 2 0 0 00:03:21.921 asserts 497 497 497 0 n/a 00:03:21.921 00:03:21.921 Elapsed time = 1.381 seconds 00:03:21.921 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.921 EAL: request: mp_malloc_sync 00:03:21.921 EAL: No shared files mode enabled, IPC is disabled 00:03:21.921 EAL: Heap on socket 0 was shrunk by 2MB 00:03:21.921 EAL: No shared files mode enabled, IPC is disabled 00:03:21.921 EAL: No shared files mode enabled, IPC is disabled 00:03:21.921 EAL: No shared files mode enabled, IPC is disabled 00:03:21.921 00:03:21.921 real 0m1.494s 00:03:21.921 user 0m0.873s 00:03:21.921 sys 0m0.592s 00:03:21.921 00:07:13 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.921 00:07:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:21.921 ************************************ 00:03:21.921 END TEST env_vtophys 00:03:21.921 ************************************ 00:03:21.921 00:07:13 env -- common/autotest_common.sh@1142 -- # return 0 00:03:21.921 00:07:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:21.921 00:07:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.921 00:07:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.921 00:07:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:21.921 ************************************ 00:03:21.921 START TEST env_pci 00:03:21.921 ************************************ 00:03:21.921 00:07:13 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:21.921 00:03:21.921 00:03:21.921 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.921 http://cunit.sourceforge.net/ 00:03:21.921 00:03:21.921 00:03:21.921 Suite: pci 00:03:21.921 Test: pci_hook ...[2024-07-16 00:07:13.516712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2157001 has claimed it 00:03:22.180 EAL: Cannot find device (10000:00:01.0) 00:03:22.180 EAL: Failed to attach device on primary process 00:03:22.180 passed 00:03:22.180 00:03:22.180 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.180 suites 1 1 n/a 0 0 00:03:22.180 tests 1 1 1 0 0 00:03:22.180 asserts 25 25 25 0 n/a 00:03:22.180 00:03:22.180 Elapsed time = 0.022 seconds 00:03:22.180 00:03:22.180 real 0m0.035s 00:03:22.180 user 0m0.011s 00:03:22.180 sys 0m0.024s 00:03:22.180 00:07:13 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.180 00:07:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:22.180 ************************************ 00:03:22.180 END TEST env_pci 00:03:22.180 ************************************ 
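The pci_hook failure logged above is the test passing, not a fault: pci_ut claims a deliberately fake BDF (10000:00:01.0) and expects the second claim to be refused because a lock file already exists at /var/tmp/spdk_pci_lock_<bdf>, as the "probably process 2157001 has claimed it" message shows. A minimal shell sketch of that claim/conflict behaviour, using flock(1) as a stand-in for the lock SPDK takes internally (the lock-file name comes from the log above; everything else here is illustrative, not SPDK's API):

    bdf="10000:00:01.0"                      # fake BDF, as used by pci_ut above
    lock="/var/tmp/spdk_pci_lock_${bdf}"
    flock -n "$lock" -c "sleep 5" &          # first claimant holds the lock
    sleep 1
    if ! flock -n "$lock" -c "true"; then    # a second, non-blocking claim fails
        echo "claim refused: ${lock} is held by another process"
    fi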
00:03:22.180 00:07:13 env -- common/autotest_common.sh@1142 -- # return 0 00:03:22.180 00:07:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:22.180 00:07:13 env -- env/env.sh@15 -- # uname 00:03:22.180 00:07:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:22.180 00:07:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:22.180 00:07:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.180 00:07:13 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:22.180 00:07:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.180 00:07:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.180 ************************************ 00:03:22.180 START TEST env_dpdk_post_init 00:03:22.180 ************************************ 00:03:22.180 00:07:13 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.180 EAL: Detected CPU lcores: 48 00:03:22.180 EAL: Detected NUMA nodes: 2 00:03:22.180 EAL: Detected shared linkage of DPDK 00:03:22.180 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.180 EAL: Selected IOVA mode 'VA' 00:03:22.180 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.180 EAL: VFIO support initialized 00:03:22.180 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.180 EAL: Using IOMMU type 1 (Type 1) 00:03:22.180 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:22.180 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:22.180 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:22.180 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:22.180 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:22.180 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:22.180 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:22.180 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:22.439 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:22.439 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:22.439 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:22.439 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:22.439 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:22.439 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:22.439 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:22.439 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:23.377 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:26.696 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:26.696 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:26.696 Starting DPDK initialization... 00:03:26.696 Starting SPDK post initialization... 00:03:26.696 SPDK NVMe probe 00:03:26.696 Attaching to 0000:88:00.0 00:03:26.696 Attached to 0000:88:00.0 00:03:26.696 Cleaning up... 
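env_dpdk_post_init brings up a full DPDK environment on a single core (-c 0x1) and probes every device the allowlist exposes, which is why all sixteen I/OAT channels and the NVMe controller at 0000:88:00.0 appear as spdk_ioat/spdk_nvme probe lines before the post-initialization NVMe attach. The --base-virtaddr=0x200000000000 option asks EAL to place its memory mappings at a fixed virtual address, which keeps hugepage addresses stable across runs and processes. The invocation can be reproduced standalone; the path below is this workspace's and root privileges are assumed:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo "$spdk/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000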
00:03:26.696 00:03:26.696 real 0m4.432s 00:03:26.696 user 0m3.309s 00:03:26.696 sys 0m0.187s 00:03:26.696 00:07:18 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.696 00:07:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:26.696 ************************************ 00:03:26.696 END TEST env_dpdk_post_init 00:03:26.696 ************************************ 00:03:26.696 00:07:18 env -- common/autotest_common.sh@1142 -- # return 0 00:03:26.696 00:07:18 env -- env/env.sh@26 -- # uname 00:03:26.696 00:07:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:26.696 00:07:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.696 00:07:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.696 00:07:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.696 00:07:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.696 ************************************ 00:03:26.696 START TEST env_mem_callbacks 00:03:26.696 ************************************ 00:03:26.696 00:07:18 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.696 EAL: Detected CPU lcores: 48 00:03:26.696 EAL: Detected NUMA nodes: 2 00:03:26.696 EAL: Detected shared linkage of DPDK 00:03:26.696 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.696 EAL: Selected IOVA mode 'VA' 00:03:26.696 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.696 EAL: VFIO support initialized 00:03:26.696 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.696 00:03:26.696 00:03:26.696 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.696 http://cunit.sourceforge.net/ 00:03:26.696 00:03:26.696 00:03:26.696 Suite: memory 00:03:26.696 Test: test ... 
00:03:26.696 register 0x200000200000 2097152 00:03:26.696 malloc 3145728 00:03:26.696 register 0x200000400000 4194304 00:03:26.696 buf 0x200000500000 len 3145728 PASSED 00:03:26.696 malloc 64 00:03:26.696 buf 0x2000004fff40 len 64 PASSED 00:03:26.696 malloc 4194304 00:03:26.696 register 0x200000800000 6291456 00:03:26.696 buf 0x200000a00000 len 4194304 PASSED 00:03:26.696 free 0x200000500000 3145728 00:03:26.696 free 0x2000004fff40 64 00:03:26.696 unregister 0x200000400000 4194304 PASSED 00:03:26.696 free 0x200000a00000 4194304 00:03:26.696 unregister 0x200000800000 6291456 PASSED 00:03:26.696 malloc 8388608 00:03:26.696 register 0x200000400000 10485760 00:03:26.696 buf 0x200000600000 len 8388608 PASSED 00:03:26.696 free 0x200000600000 8388608 00:03:26.696 unregister 0x200000400000 10485760 PASSED 00:03:26.696 passed 00:03:26.696 00:03:26.696 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.696 suites 1 1 n/a 0 0 00:03:26.696 tests 1 1 1 0 0 00:03:26.696 asserts 15 15 15 0 n/a 00:03:26.696 00:03:26.696 Elapsed time = 0.005 seconds 00:03:26.696 00:03:26.696 real 0m0.044s 00:03:26.696 user 0m0.014s 00:03:26.696 sys 0m0.030s 00:03:26.696 00:07:18 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.696 00:07:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:26.696 ************************************ 00:03:26.696 END TEST env_mem_callbacks 00:03:26.696 ************************************ 00:03:26.696 00:07:18 env -- common/autotest_common.sh@1142 -- # return 0 00:03:26.696 00:03:26.696 real 0m6.431s 00:03:26.696 user 0m4.460s 00:03:26.696 sys 0m1.024s 00:03:26.696 00:07:18 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.696 00:07:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.696 ************************************ 00:03:26.696 END TEST env 00:03:26.696 ************************************ 00:03:26.696 00:07:18 -- common/autotest_common.sh@1142 -- # return 0 00:03:26.696 00:07:18 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:26.696 00:07:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.696 00:07:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.696 00:07:18 -- common/autotest_common.sh@10 -- # set +x 00:03:26.696 ************************************ 00:03:26.696 START TEST rpc 00:03:26.696 ************************************ 00:03:26.696 00:07:18 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:26.696 * Looking for test storage... 00:03:26.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:26.696 00:07:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2157718 00:03:26.696 00:07:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:26.696 00:07:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:26.696 00:07:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2157718 00:03:26.696 00:07:18 rpc -- common/autotest_common.sh@829 -- # '[' -z 2157718 ']' 00:03:26.696 00:07:18 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.696 00:07:18 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:26.696 00:07:18 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:26.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:26.696 00:07:18 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:26.696 00:07:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.696 [2024-07-16 00:07:18.272253] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:03:26.697 [2024-07-16 00:07:18.272336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157718 ] 00:03:26.697 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.956 [2024-07-16 00:07:18.335692] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:26.956 [2024-07-16 00:07:18.455948] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:26.956 [2024-07-16 00:07:18.456000] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2157718' to capture a snapshot of events at runtime. 00:03:26.956 [2024-07-16 00:07:18.456031] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:26.956 [2024-07-16 00:07:18.456043] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:26.956 [2024-07-16 00:07:18.456053] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2157718 for offline analysis/debug. 00:03:26.956 [2024-07-16 00:07:18.456081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.216 00:07:18 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:27.216 00:07:18 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:27.216 00:07:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.216 00:07:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.216 00:07:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:27.216 00:07:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:27.216 00:07:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.216 00:07:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.216 00:07:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.216 ************************************ 00:03:27.216 START TEST rpc_integrity 00:03:27.216 ************************************ 00:03:27.216 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:27.216 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:27.216 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.216 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.216 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.216 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:27.216 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:27.216 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:27.216 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:27.216 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.216 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.216 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.217 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:27.217 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:27.217 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.217 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.217 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.217 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:27.217 { 00:03:27.217 "name": "Malloc0", 00:03:27.217 "aliases": [ 00:03:27.217 "75eb7cbe-e4bd-4163-a6b0-85befa0153f8" 00:03:27.217 ], 00:03:27.217 "product_name": "Malloc disk", 00:03:27.217 "block_size": 512, 00:03:27.217 "num_blocks": 16384, 00:03:27.217 "uuid": "75eb7cbe-e4bd-4163-a6b0-85befa0153f8", 00:03:27.217 "assigned_rate_limits": { 00:03:27.217 "rw_ios_per_sec": 0, 00:03:27.217 "rw_mbytes_per_sec": 0, 00:03:27.217 "r_mbytes_per_sec": 0, 00:03:27.217 "w_mbytes_per_sec": 0 00:03:27.217 }, 00:03:27.217 "claimed": false, 00:03:27.217 "zoned": false, 00:03:27.217 "supported_io_types": { 00:03:27.217 "read": true, 00:03:27.217 "write": true, 00:03:27.217 "unmap": true, 00:03:27.217 "flush": true, 00:03:27.217 "reset": true, 00:03:27.217 "nvme_admin": false, 00:03:27.217 "nvme_io": false, 00:03:27.217 "nvme_io_md": false, 00:03:27.217 "write_zeroes": true, 00:03:27.217 "zcopy": true, 00:03:27.217 "get_zone_info": false, 00:03:27.217 "zone_management": false, 00:03:27.217 "zone_append": false, 00:03:27.217 "compare": false, 00:03:27.217 "compare_and_write": false, 00:03:27.217 "abort": true, 00:03:27.217 "seek_hole": false, 00:03:27.217 "seek_data": false, 00:03:27.217 "copy": true, 00:03:27.217 "nvme_iov_md": false 00:03:27.217 }, 00:03:27.217 "memory_domains": [ 00:03:27.217 { 00:03:27.217 "dma_device_id": "system", 00:03:27.217 "dma_device_type": 1 00:03:27.217 }, 00:03:27.217 { 00:03:27.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.217 "dma_device_type": 2 00:03:27.217 } 00:03:27.217 ], 00:03:27.217 "driver_specific": {} 00:03:27.217 } 00:03:27.217 ]' 00:03:27.217 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 [2024-07-16 00:07:18.854584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:27.477 [2024-07-16 00:07:18.854634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:27.477 [2024-07-16 00:07:18.854658] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x175aeb0 00:03:27.477 [2024-07-16 00:07:18.854694] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:27.477 
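
The rpc_integrity sequence above drives SPDK's bdev layer purely over JSON-RPC: it creates a malloc bdev, stacks a passthru bdev on top of it, and asserts on the length of the bdev_get_bdevs output with jq (rpc_cmd in these logs is autotest's wrapper around scripts/rpc.py). A minimal sketch of the same flow run by hand, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock; the inline expected counts are assumptions drawn from this log, not guaranteed output:

    # 8 MiB malloc bdev with 512-byte blocks -> 16384 blocks, prints "Malloc0"
    scripts/rpc.py bdev_malloc_create 8 512
    # layer a passthru bdev on the malloc bdev; the malloc becomes "claimed"
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length   # expect 2 (Malloc0 + Passthru0)
    # tear down in reverse order, then the list should be empty again
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length   # expect 0

The "claimed": true / "claim_type": "exclusive_write" fields on Malloc0 in the bdev dump that follows are the observable effect of the passthru claim.
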
[2024-07-16 00:07:18.856148] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:27.477 [2024-07-16 00:07:18.856188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:27.477 Passthru0 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:27.477 { 00:03:27.477 "name": "Malloc0", 00:03:27.477 "aliases": [ 00:03:27.477 "75eb7cbe-e4bd-4163-a6b0-85befa0153f8" 00:03:27.477 ], 00:03:27.477 "product_name": "Malloc disk", 00:03:27.477 "block_size": 512, 00:03:27.477 "num_blocks": 16384, 00:03:27.477 "uuid": "75eb7cbe-e4bd-4163-a6b0-85befa0153f8", 00:03:27.477 "assigned_rate_limits": { 00:03:27.477 "rw_ios_per_sec": 0, 00:03:27.477 "rw_mbytes_per_sec": 0, 00:03:27.477 "r_mbytes_per_sec": 0, 00:03:27.477 "w_mbytes_per_sec": 0 00:03:27.477 }, 00:03:27.477 "claimed": true, 00:03:27.477 "claim_type": "exclusive_write", 00:03:27.477 "zoned": false, 00:03:27.477 "supported_io_types": { 00:03:27.477 "read": true, 00:03:27.477 "write": true, 00:03:27.477 "unmap": true, 00:03:27.477 "flush": true, 00:03:27.477 "reset": true, 00:03:27.477 "nvme_admin": false, 00:03:27.477 "nvme_io": false, 00:03:27.477 "nvme_io_md": false, 00:03:27.477 "write_zeroes": true, 00:03:27.477 "zcopy": true, 00:03:27.477 "get_zone_info": false, 00:03:27.477 "zone_management": false, 00:03:27.477 "zone_append": false, 00:03:27.477 "compare": false, 00:03:27.477 "compare_and_write": false, 00:03:27.477 "abort": true, 00:03:27.477 "seek_hole": false, 00:03:27.477 "seek_data": false, 00:03:27.477 "copy": true, 00:03:27.477 "nvme_iov_md": false 00:03:27.477 }, 00:03:27.477 "memory_domains": [ 00:03:27.477 { 00:03:27.477 "dma_device_id": "system", 00:03:27.477 "dma_device_type": 1 00:03:27.477 }, 00:03:27.477 { 00:03:27.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.477 "dma_device_type": 2 00:03:27.477 } 00:03:27.477 ], 00:03:27.477 "driver_specific": {} 00:03:27.477 }, 00:03:27.477 { 00:03:27.477 "name": "Passthru0", 00:03:27.477 "aliases": [ 00:03:27.477 "e3204800-b6c6-58ca-9096-03b66a8a622a" 00:03:27.477 ], 00:03:27.477 "product_name": "passthru", 00:03:27.477 "block_size": 512, 00:03:27.477 "num_blocks": 16384, 00:03:27.477 "uuid": "e3204800-b6c6-58ca-9096-03b66a8a622a", 00:03:27.477 "assigned_rate_limits": { 00:03:27.477 "rw_ios_per_sec": 0, 00:03:27.477 "rw_mbytes_per_sec": 0, 00:03:27.477 "r_mbytes_per_sec": 0, 00:03:27.477 "w_mbytes_per_sec": 0 00:03:27.477 }, 00:03:27.477 "claimed": false, 00:03:27.477 "zoned": false, 00:03:27.477 "supported_io_types": { 00:03:27.477 "read": true, 00:03:27.477 "write": true, 00:03:27.477 "unmap": true, 00:03:27.477 "flush": true, 00:03:27.477 "reset": true, 00:03:27.477 "nvme_admin": false, 00:03:27.477 "nvme_io": false, 00:03:27.477 "nvme_io_md": false, 00:03:27.477 "write_zeroes": true, 00:03:27.477 "zcopy": true, 00:03:27.477 "get_zone_info": false, 00:03:27.477 "zone_management": false, 00:03:27.477 "zone_append": false, 00:03:27.477 "compare": false, 00:03:27.477 "compare_and_write": false, 00:03:27.477 "abort": true, 00:03:27.477 "seek_hole": false, 
00:03:27.477 "seek_data": false, 00:03:27.477 "copy": true, 00:03:27.477 "nvme_iov_md": false 00:03:27.477 }, 00:03:27.477 "memory_domains": [ 00:03:27.477 { 00:03:27.477 "dma_device_id": "system", 00:03:27.477 "dma_device_type": 1 00:03:27.477 }, 00:03:27.477 { 00:03:27.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.477 "dma_device_type": 2 00:03:27.477 } 00:03:27.477 ], 00:03:27.477 "driver_specific": { 00:03:27.477 "passthru": { 00:03:27.477 "name": "Passthru0", 00:03:27.477 "base_bdev_name": "Malloc0" 00:03:27.477 } 00:03:27.477 } 00:03:27.477 } 00:03:27.477 ]' 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:27.477 00:07:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:27.477 00:03:27.477 real 0m0.227s 00:03:27.477 user 0m0.147s 00:03:27.477 sys 0m0.026s 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.477 00:07:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 ************************************ 00:03:27.477 END TEST rpc_integrity 00:03:27.477 ************************************ 00:03:27.477 00:07:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:27.477 00:07:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:27.477 00:07:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.477 00:07:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.477 00:07:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 ************************************ 00:03:27.477 START TEST rpc_plugins 00:03:27.477 ************************************ 00:03:27.477 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:27.477 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:27.477 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.477 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.477 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:27.477 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:27.477 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.477 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.477 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.477 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:27.477 { 00:03:27.477 "name": "Malloc1", 00:03:27.477 "aliases": [ 00:03:27.477 "c0140129-3c5b-40de-8e3f-2057d4f1fd06" 00:03:27.477 ], 00:03:27.477 "product_name": "Malloc disk", 00:03:27.477 "block_size": 4096, 00:03:27.477 "num_blocks": 256, 00:03:27.477 "uuid": "c0140129-3c5b-40de-8e3f-2057d4f1fd06", 00:03:27.477 "assigned_rate_limits": { 00:03:27.477 "rw_ios_per_sec": 0, 00:03:27.477 "rw_mbytes_per_sec": 0, 00:03:27.477 "r_mbytes_per_sec": 0, 00:03:27.477 "w_mbytes_per_sec": 0 00:03:27.477 }, 00:03:27.477 "claimed": false, 00:03:27.477 "zoned": false, 00:03:27.477 "supported_io_types": { 00:03:27.477 "read": true, 00:03:27.477 "write": true, 00:03:27.477 "unmap": true, 00:03:27.477 "flush": true, 00:03:27.477 "reset": true, 00:03:27.477 "nvme_admin": false, 00:03:27.477 "nvme_io": false, 00:03:27.477 "nvme_io_md": false, 00:03:27.477 "write_zeroes": true, 00:03:27.477 "zcopy": true, 00:03:27.477 "get_zone_info": false, 00:03:27.477 "zone_management": false, 00:03:27.477 "zone_append": false, 00:03:27.477 "compare": false, 00:03:27.478 "compare_and_write": false, 00:03:27.478 "abort": true, 00:03:27.478 "seek_hole": false, 00:03:27.478 "seek_data": false, 00:03:27.478 "copy": true, 00:03:27.478 "nvme_iov_md": false 00:03:27.478 }, 00:03:27.478 "memory_domains": [ 00:03:27.478 { 00:03:27.478 "dma_device_id": "system", 00:03:27.478 "dma_device_type": 1 00:03:27.478 }, 00:03:27.478 { 00:03:27.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.478 "dma_device_type": 2 00:03:27.478 } 00:03:27.478 ], 00:03:27.478 "driver_specific": {} 00:03:27.478 } 00:03:27.478 ]' 00:03:27.478 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:27.478 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:27.478 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:27.478 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.478 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.478 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.478 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:27.478 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.478 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.736 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.736 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:27.736 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:27.736 00:07:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:27.736 00:03:27.736 real 0m0.109s 00:03:27.736 user 0m0.069s 00:03:27.736 sys 0m0.012s 00:03:27.736 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.736 00:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.736 ************************************ 00:03:27.736 END TEST rpc_plugins 00:03:27.736 ************************************ 00:03:27.736 00:07:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:27.736 00:07:19 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:27.736 00:07:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.736 00:07:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.736 00:07:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.736 ************************************ 00:03:27.736 START TEST rpc_trace_cmd_test 00:03:27.736 ************************************ 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:27.736 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2157718", 00:03:27.736 "tpoint_group_mask": "0x8", 00:03:27.736 "iscsi_conn": { 00:03:27.736 "mask": "0x2", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "scsi": { 00:03:27.736 "mask": "0x4", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "bdev": { 00:03:27.736 "mask": "0x8", 00:03:27.736 "tpoint_mask": "0xffffffffffffffff" 00:03:27.736 }, 00:03:27.736 "nvmf_rdma": { 00:03:27.736 "mask": "0x10", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "nvmf_tcp": { 00:03:27.736 "mask": "0x20", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "ftl": { 00:03:27.736 "mask": "0x40", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "blobfs": { 00:03:27.736 "mask": "0x80", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "dsa": { 00:03:27.736 "mask": "0x200", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "thread": { 00:03:27.736 "mask": "0x400", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "nvme_pcie": { 00:03:27.736 "mask": "0x800", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "iaa": { 00:03:27.736 "mask": "0x1000", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "nvme_tcp": { 00:03:27.736 "mask": "0x2000", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "bdev_nvme": { 00:03:27.736 "mask": "0x4000", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 }, 00:03:27.736 "sock": { 00:03:27.736 "mask": "0x8000", 00:03:27.736 "tpoint_mask": "0x0" 00:03:27.736 } 00:03:27.736 }' 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:27.736 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:27.995 00:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
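
rpc_trace_cmd_test, running here, checks the tracing state that `spdk_tgt -e bdev` set up at boot: the bdev tracepoint group (mask 0x8) is enabled while every other group stays at 0x0, and the trace buffer is exposed in shared memory under a pid-derived path. A sketch of the same inspection against this run's target; the jq filters are illustrative approximations of what rpc.sh asserts:

    scripts/rpc.py trace_get_info | jq .
    # the test asserts, roughly:
    #   has("tpoint_group_mask")  and  .tpoint_group_mask == "0x8"   (from -e bdev)
    #   .bdev.tpoint_mask != "0x0"   # here 0xffffffffffffffff: all bdev tracepoints on
    #   .tpoint_shm_path  == "/dev/shm/spdk_tgt_trace.pid<target pid>"
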
00:03:27.995 00:03:27.995 real 0m0.191s 00:03:27.995 user 0m0.171s 00:03:27.995 sys 0m0.015s 00:03:27.995 00:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.995 00:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:27.995 ************************************ 00:03:27.995 END TEST rpc_trace_cmd_test 00:03:27.995 ************************************ 00:03:27.995 00:07:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:27.995 00:07:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:27.995 00:07:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:27.995 00:07:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:27.995 00:07:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.995 00:07:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.995 00:07:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.995 ************************************ 00:03:27.995 START TEST rpc_daemon_integrity 00:03:27.995 ************************************ 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:27.995 { 00:03:27.995 "name": "Malloc2", 00:03:27.995 "aliases": [ 00:03:27.995 "359d2327-3c25-44b9-ba6f-afc5b6abdedc" 00:03:27.995 ], 00:03:27.995 "product_name": "Malloc disk", 00:03:27.995 "block_size": 512, 00:03:27.995 "num_blocks": 16384, 00:03:27.995 "uuid": "359d2327-3c25-44b9-ba6f-afc5b6abdedc", 00:03:27.995 "assigned_rate_limits": { 00:03:27.995 "rw_ios_per_sec": 0, 00:03:27.995 "rw_mbytes_per_sec": 0, 00:03:27.995 "r_mbytes_per_sec": 0, 00:03:27.995 "w_mbytes_per_sec": 0 00:03:27.995 }, 00:03:27.995 "claimed": false, 00:03:27.995 "zoned": false, 00:03:27.995 "supported_io_types": { 00:03:27.995 "read": true, 00:03:27.995 "write": true, 00:03:27.995 "unmap": true, 00:03:27.995 "flush": true, 00:03:27.995 "reset": true, 00:03:27.995 "nvme_admin": false, 00:03:27.995 "nvme_io": false, 
00:03:27.995 "nvme_io_md": false, 00:03:27.995 "write_zeroes": true, 00:03:27.995 "zcopy": true, 00:03:27.995 "get_zone_info": false, 00:03:27.995 "zone_management": false, 00:03:27.995 "zone_append": false, 00:03:27.995 "compare": false, 00:03:27.995 "compare_and_write": false, 00:03:27.995 "abort": true, 00:03:27.995 "seek_hole": false, 00:03:27.995 "seek_data": false, 00:03:27.995 "copy": true, 00:03:27.995 "nvme_iov_md": false 00:03:27.995 }, 00:03:27.995 "memory_domains": [ 00:03:27.995 { 00:03:27.995 "dma_device_id": "system", 00:03:27.995 "dma_device_type": 1 00:03:27.995 }, 00:03:27.995 { 00:03:27.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.995 "dma_device_type": 2 00:03:27.995 } 00:03:27.995 ], 00:03:27.995 "driver_specific": {} 00:03:27.995 } 00:03:27.995 ]' 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.995 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.995 [2024-07-16 00:07:19.520618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:27.995 [2024-07-16 00:07:19.520668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:27.995 [2024-07-16 00:07:19.520696] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1753f40 00:03:27.995 [2024-07-16 00:07:19.520727] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:27.995 [2024-07-16 00:07:19.522091] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:27.996 [2024-07-16 00:07:19.522116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:27.996 Passthru0 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:27.996 { 00:03:27.996 "name": "Malloc2", 00:03:27.996 "aliases": [ 00:03:27.996 "359d2327-3c25-44b9-ba6f-afc5b6abdedc" 00:03:27.996 ], 00:03:27.996 "product_name": "Malloc disk", 00:03:27.996 "block_size": 512, 00:03:27.996 "num_blocks": 16384, 00:03:27.996 "uuid": "359d2327-3c25-44b9-ba6f-afc5b6abdedc", 00:03:27.996 "assigned_rate_limits": { 00:03:27.996 "rw_ios_per_sec": 0, 00:03:27.996 "rw_mbytes_per_sec": 0, 00:03:27.996 "r_mbytes_per_sec": 0, 00:03:27.996 "w_mbytes_per_sec": 0 00:03:27.996 }, 00:03:27.996 "claimed": true, 00:03:27.996 "claim_type": "exclusive_write", 00:03:27.996 "zoned": false, 00:03:27.996 "supported_io_types": { 00:03:27.996 "read": true, 00:03:27.996 "write": true, 00:03:27.996 "unmap": true, 00:03:27.996 "flush": true, 00:03:27.996 "reset": true, 00:03:27.996 "nvme_admin": false, 00:03:27.996 "nvme_io": false, 00:03:27.996 "nvme_io_md": false, 00:03:27.996 "write_zeroes": true, 00:03:27.996 "zcopy": true, 00:03:27.996 "get_zone_info": 
false, 00:03:27.996 "zone_management": false, 00:03:27.996 "zone_append": false, 00:03:27.996 "compare": false, 00:03:27.996 "compare_and_write": false, 00:03:27.996 "abort": true, 00:03:27.996 "seek_hole": false, 00:03:27.996 "seek_data": false, 00:03:27.996 "copy": true, 00:03:27.996 "nvme_iov_md": false 00:03:27.996 }, 00:03:27.996 "memory_domains": [ 00:03:27.996 { 00:03:27.996 "dma_device_id": "system", 00:03:27.996 "dma_device_type": 1 00:03:27.996 }, 00:03:27.996 { 00:03:27.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.996 "dma_device_type": 2 00:03:27.996 } 00:03:27.996 ], 00:03:27.996 "driver_specific": {} 00:03:27.996 }, 00:03:27.996 { 00:03:27.996 "name": "Passthru0", 00:03:27.996 "aliases": [ 00:03:27.996 "3cc0a568-a32b-5a86-b257-59c8d88685e7" 00:03:27.996 ], 00:03:27.996 "product_name": "passthru", 00:03:27.996 "block_size": 512, 00:03:27.996 "num_blocks": 16384, 00:03:27.996 "uuid": "3cc0a568-a32b-5a86-b257-59c8d88685e7", 00:03:27.996 "assigned_rate_limits": { 00:03:27.996 "rw_ios_per_sec": 0, 00:03:27.996 "rw_mbytes_per_sec": 0, 00:03:27.996 "r_mbytes_per_sec": 0, 00:03:27.996 "w_mbytes_per_sec": 0 00:03:27.996 }, 00:03:27.996 "claimed": false, 00:03:27.996 "zoned": false, 00:03:27.996 "supported_io_types": { 00:03:27.996 "read": true, 00:03:27.996 "write": true, 00:03:27.996 "unmap": true, 00:03:27.996 "flush": true, 00:03:27.996 "reset": true, 00:03:27.996 "nvme_admin": false, 00:03:27.996 "nvme_io": false, 00:03:27.996 "nvme_io_md": false, 00:03:27.996 "write_zeroes": true, 00:03:27.996 "zcopy": true, 00:03:27.996 "get_zone_info": false, 00:03:27.996 "zone_management": false, 00:03:27.996 "zone_append": false, 00:03:27.996 "compare": false, 00:03:27.996 "compare_and_write": false, 00:03:27.996 "abort": true, 00:03:27.996 "seek_hole": false, 00:03:27.996 "seek_data": false, 00:03:27.996 "copy": true, 00:03:27.996 "nvme_iov_md": false 00:03:27.996 }, 00:03:27.996 "memory_domains": [ 00:03:27.996 { 00:03:27.996 "dma_device_id": "system", 00:03:27.996 "dma_device_type": 1 00:03:27.996 }, 00:03:27.996 { 00:03:27.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.996 "dma_device_type": 2 00:03:27.996 } 00:03:27.996 ], 00:03:27.996 "driver_specific": { 00:03:27.996 "passthru": { 00:03:27.996 "name": "Passthru0", 00:03:27.996 "base_bdev_name": "Malloc2" 00:03:27.996 } 00:03:27.996 } 00:03:27.996 } 00:03:27.996 ]' 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.996 00:07:19 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:27.996 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:28.255 00:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.255 00:03:28.255 real 0m0.227s 00:03:28.255 user 0m0.148s 00:03:28.255 sys 0m0.022s 00:03:28.255 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.255 00:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.255 ************************************ 00:03:28.255 END TEST rpc_daemon_integrity 00:03:28.255 ************************************ 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:28.255 00:07:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:28.255 00:07:19 rpc -- rpc/rpc.sh@84 -- # killprocess 2157718 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@948 -- # '[' -z 2157718 ']' 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@952 -- # kill -0 2157718 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@953 -- # uname 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2157718 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2157718' 00:03:28.255 killing process with pid 2157718 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@967 -- # kill 2157718 00:03:28.255 00:07:19 rpc -- common/autotest_common.sh@972 -- # wait 2157718 00:03:28.822 00:03:28.822 real 0m1.973s 00:03:28.822 user 0m2.474s 00:03:28.822 sys 0m0.607s 00:03:28.822 00:07:20 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.822 00:07:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.822 ************************************ 00:03:28.822 END TEST rpc 00:03:28.822 ************************************ 00:03:28.822 00:07:20 -- common/autotest_common.sh@1142 -- # return 0 00:03:28.822 00:07:20 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:28.822 00:07:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.822 00:07:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.822 00:07:20 -- common/autotest_common.sh@10 -- # set +x 00:03:28.822 ************************************ 00:03:28.822 START TEST skip_rpc 00:03:28.822 ************************************ 00:03:28.822 00:07:20 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:28.822 * Looking for test storage... 
00:03:28.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.822 00:07:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:28.822 00:07:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:28.822 00:07:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:28.822 00:07:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.822 00:07:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.822 00:07:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.822 ************************************ 00:03:28.822 START TEST skip_rpc 00:03:28.822 ************************************ 00:03:28.822 00:07:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:28.822 00:07:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2158095 00:03:28.822 00:07:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:28.822 00:07:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:28.822 00:07:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:28.822 [2024-07-16 00:07:20.323218] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:03:28.822 [2024-07-16 00:07:20.323279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158095 ] 00:03:28.822 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.822 [2024-07-16 00:07:20.381830] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.079 [2024-07-16 00:07:20.500170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2158095 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2158095 ']' 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2158095 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2158095 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2158095' 00:03:34.354 killing process with pid 2158095 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2158095 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2158095 00:03:34.354 00:03:34.354 real 0m5.500s 00:03:34.354 user 0m5.186s 00:03:34.354 sys 0m0.318s 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.354 00:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.354 ************************************ 00:03:34.354 END TEST skip_rpc 00:03:34.354 ************************************ 00:03:34.354 00:07:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:34.354 00:07:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:34.354 00:07:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.354 00:07:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.354 00:07:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.354 ************************************ 00:03:34.354 START TEST skip_rpc_with_json 00:03:34.354 ************************************ 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2158786 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2158786 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2158786 ']' 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
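
Two tests meet at this point in the log. skip_rpc, just completed, is a negative test: with `--no-rpc-server` the NOT wrapper expects `rpc_cmd spdk_get_version` to fail, which is the es=1 seen above. skip_rpc_with_json, starting here, round-trips the target configuration through JSON: it first confirms the TCP transport does not exist, creates it over RPC, saves the live configuration, and later restarts the target from that file. A sketch of the round-trip, with file paths abbreviated; the exact flags come from skip_rpc.sh as echoed in this log:

    scripts/rpc.py nvmf_get_transports --trtype tcp   # fails first: transport does not exist yet
    scripts/rpc.py nvmf_create_transport -t tcp       # logs "*** TCP Transport Init ***"
    scripts/rpc.py save_config > .../test/rpc/config.json
    # second pass: spdk_tgt --no-rpc-server -m 0x1 --json .../test/rpc/config.json
    # success criterion: the reloaded target logs "TCP Transport Init" again
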
00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:34.354 00:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.354 [2024-07-16 00:07:25.871724] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:03:34.354 [2024-07-16 00:07:25.871823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158786 ] 00:03:34.354 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.354 [2024-07-16 00:07:25.929285] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.613 [2024-07-16 00:07:26.040436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.872 [2024-07-16 00:07:26.300648] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:34.872 request: 00:03:34.872 { 00:03:34.872 "trtype": "tcp", 00:03:34.872 "method": "nvmf_get_transports", 00:03:34.872 "req_id": 1 00:03:34.872 } 00:03:34.872 Got JSON-RPC error response 00:03:34.872 response: 00:03:34.872 { 00:03:34.872 "code": -19, 00:03:34.872 "message": "No such device" 00:03:34.872 } 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.872 [2024-07-16 00:07:26.308784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.872 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:34.872 { 00:03:34.872 "subsystems": [ 00:03:34.872 { 00:03:34.872 "subsystem": "vfio_user_target", 00:03:34.872 "config": null 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "subsystem": "keyring", 00:03:34.872 "config": [] 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "subsystem": "iobuf", 00:03:34.872 "config": [ 00:03:34.872 { 00:03:34.872 "method": "iobuf_set_options", 00:03:34.872 "params": { 00:03:34.872 "small_pool_count": 8192, 00:03:34.872 "large_pool_count": 1024, 00:03:34.872 "small_bufsize": 8192, 00:03:34.872 "large_bufsize": 
135168 00:03:34.872 } 00:03:34.872 } 00:03:34.872 ] 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "subsystem": "sock", 00:03:34.872 "config": [ 00:03:34.872 { 00:03:34.872 "method": "sock_set_default_impl", 00:03:34.872 "params": { 00:03:34.872 "impl_name": "posix" 00:03:34.872 } 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "method": "sock_impl_set_options", 00:03:34.872 "params": { 00:03:34.872 "impl_name": "ssl", 00:03:34.872 "recv_buf_size": 4096, 00:03:34.872 "send_buf_size": 4096, 00:03:34.872 "enable_recv_pipe": true, 00:03:34.872 "enable_quickack": false, 00:03:34.872 "enable_placement_id": 0, 00:03:34.872 "enable_zerocopy_send_server": true, 00:03:34.872 "enable_zerocopy_send_client": false, 00:03:34.872 "zerocopy_threshold": 0, 00:03:34.872 "tls_version": 0, 00:03:34.872 "enable_ktls": false 00:03:34.872 } 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "method": "sock_impl_set_options", 00:03:34.872 "params": { 00:03:34.872 "impl_name": "posix", 00:03:34.872 "recv_buf_size": 2097152, 00:03:34.872 "send_buf_size": 2097152, 00:03:34.872 "enable_recv_pipe": true, 00:03:34.872 "enable_quickack": false, 00:03:34.872 "enable_placement_id": 0, 00:03:34.872 "enable_zerocopy_send_server": true, 00:03:34.872 "enable_zerocopy_send_client": false, 00:03:34.872 "zerocopy_threshold": 0, 00:03:34.872 "tls_version": 0, 00:03:34.872 "enable_ktls": false 00:03:34.872 } 00:03:34.872 } 00:03:34.872 ] 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "subsystem": "vmd", 00:03:34.872 "config": [] 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "subsystem": "accel", 00:03:34.872 "config": [ 00:03:34.872 { 00:03:34.872 "method": "accel_set_options", 00:03:34.872 "params": { 00:03:34.872 "small_cache_size": 128, 00:03:34.872 "large_cache_size": 16, 00:03:34.872 "task_count": 2048, 00:03:34.872 "sequence_count": 2048, 00:03:34.872 "buf_count": 2048 00:03:34.872 } 00:03:34.872 } 00:03:34.872 ] 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "subsystem": "bdev", 00:03:34.872 "config": [ 00:03:34.872 { 00:03:34.872 "method": "bdev_set_options", 00:03:34.872 "params": { 00:03:34.872 "bdev_io_pool_size": 65535, 00:03:34.872 "bdev_io_cache_size": 256, 00:03:34.872 "bdev_auto_examine": true, 00:03:34.872 "iobuf_small_cache_size": 128, 00:03:34.872 "iobuf_large_cache_size": 16 00:03:34.872 } 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "method": "bdev_raid_set_options", 00:03:34.872 "params": { 00:03:34.872 "process_window_size_kb": 1024 00:03:34.872 } 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "method": "bdev_iscsi_set_options", 00:03:34.872 "params": { 00:03:34.872 "timeout_sec": 30 00:03:34.872 } 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "method": "bdev_nvme_set_options", 00:03:34.872 "params": { 00:03:34.872 "action_on_timeout": "none", 00:03:34.872 "timeout_us": 0, 00:03:34.872 "timeout_admin_us": 0, 00:03:34.872 "keep_alive_timeout_ms": 10000, 00:03:34.872 "arbitration_burst": 0, 00:03:34.872 "low_priority_weight": 0, 00:03:34.872 "medium_priority_weight": 0, 00:03:34.872 "high_priority_weight": 0, 00:03:34.872 "nvme_adminq_poll_period_us": 10000, 00:03:34.872 "nvme_ioq_poll_period_us": 0, 00:03:34.872 "io_queue_requests": 0, 00:03:34.872 "delay_cmd_submit": true, 00:03:34.872 "transport_retry_count": 4, 00:03:34.872 "bdev_retry_count": 3, 00:03:34.872 "transport_ack_timeout": 0, 00:03:34.872 "ctrlr_loss_timeout_sec": 0, 00:03:34.872 "reconnect_delay_sec": 0, 00:03:34.872 "fast_io_fail_timeout_sec": 0, 00:03:34.872 "disable_auto_failback": false, 00:03:34.872 "generate_uuids": false, 00:03:34.872 "transport_tos": 0, 
00:03:34.872 "nvme_error_stat": false, 00:03:34.872 "rdma_srq_size": 0, 00:03:34.872 "io_path_stat": false, 00:03:34.872 "allow_accel_sequence": false, 00:03:34.872 "rdma_max_cq_size": 0, 00:03:34.872 "rdma_cm_event_timeout_ms": 0, 00:03:34.872 "dhchap_digests": [ 00:03:34.872 "sha256", 00:03:34.872 "sha384", 00:03:34.872 "sha512" 00:03:34.872 ], 00:03:34.872 "dhchap_dhgroups": [ 00:03:34.872 "null", 00:03:34.872 "ffdhe2048", 00:03:34.872 "ffdhe3072", 00:03:34.872 "ffdhe4096", 00:03:34.872 "ffdhe6144", 00:03:34.872 "ffdhe8192" 00:03:34.872 ] 00:03:34.872 } 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "method": "bdev_nvme_set_hotplug", 00:03:34.872 "params": { 00:03:34.872 "period_us": 100000, 00:03:34.872 "enable": false 00:03:34.872 } 00:03:34.872 }, 00:03:34.872 { 00:03:34.872 "method": "bdev_wait_for_examine" 00:03:34.872 } 00:03:34.872 ] 00:03:34.872 }, 00:03:34.872 { 00:03:34.873 "subsystem": "scsi", 00:03:34.873 "config": null 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "subsystem": "scheduler", 00:03:34.873 "config": [ 00:03:34.873 { 00:03:34.873 "method": "framework_set_scheduler", 00:03:34.873 "params": { 00:03:34.873 "name": "static" 00:03:34.873 } 00:03:34.873 } 00:03:34.873 ] 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "subsystem": "vhost_scsi", 00:03:34.873 "config": [] 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "subsystem": "vhost_blk", 00:03:34.873 "config": [] 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "subsystem": "ublk", 00:03:34.873 "config": [] 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "subsystem": "nbd", 00:03:34.873 "config": [] 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "subsystem": "nvmf", 00:03:34.873 "config": [ 00:03:34.873 { 00:03:34.873 "method": "nvmf_set_config", 00:03:34.873 "params": { 00:03:34.873 "discovery_filter": "match_any", 00:03:34.873 "admin_cmd_passthru": { 00:03:34.873 "identify_ctrlr": false 00:03:34.873 } 00:03:34.873 } 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "method": "nvmf_set_max_subsystems", 00:03:34.873 "params": { 00:03:34.873 "max_subsystems": 1024 00:03:34.873 } 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "method": "nvmf_set_crdt", 00:03:34.873 "params": { 00:03:34.873 "crdt1": 0, 00:03:34.873 "crdt2": 0, 00:03:34.873 "crdt3": 0 00:03:34.873 } 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "method": "nvmf_create_transport", 00:03:34.873 "params": { 00:03:34.873 "trtype": "TCP", 00:03:34.873 "max_queue_depth": 128, 00:03:34.873 "max_io_qpairs_per_ctrlr": 127, 00:03:34.873 "in_capsule_data_size": 4096, 00:03:34.873 "max_io_size": 131072, 00:03:34.873 "io_unit_size": 131072, 00:03:34.873 "max_aq_depth": 128, 00:03:34.873 "num_shared_buffers": 511, 00:03:34.873 "buf_cache_size": 4294967295, 00:03:34.873 "dif_insert_or_strip": false, 00:03:34.873 "zcopy": false, 00:03:34.873 "c2h_success": true, 00:03:34.873 "sock_priority": 0, 00:03:34.873 "abort_timeout_sec": 1, 00:03:34.873 "ack_timeout": 0, 00:03:34.873 "data_wr_pool_size": 0 00:03:34.873 } 00:03:34.873 } 00:03:34.873 ] 00:03:34.873 }, 00:03:34.873 { 00:03:34.873 "subsystem": "iscsi", 00:03:34.873 "config": [ 00:03:34.873 { 00:03:34.873 "method": "iscsi_set_options", 00:03:34.873 "params": { 00:03:34.873 "node_base": "iqn.2016-06.io.spdk", 00:03:34.873 "max_sessions": 128, 00:03:34.873 "max_connections_per_session": 2, 00:03:34.873 "max_queue_depth": 64, 00:03:34.873 "default_time2wait": 2, 00:03:34.873 "default_time2retain": 20, 00:03:34.873 "first_burst_length": 8192, 00:03:34.873 "immediate_data": true, 00:03:34.873 "allow_duplicated_isid": false, 00:03:34.873 
"error_recovery_level": 0, 00:03:34.873 "nop_timeout": 60, 00:03:34.873 "nop_in_interval": 30, 00:03:34.873 "disable_chap": false, 00:03:34.873 "require_chap": false, 00:03:34.873 "mutual_chap": false, 00:03:34.873 "chap_group": 0, 00:03:34.873 "max_large_datain_per_connection": 64, 00:03:34.873 "max_r2t_per_connection": 4, 00:03:34.873 "pdu_pool_size": 36864, 00:03:34.873 "immediate_data_pool_size": 16384, 00:03:34.873 "data_out_pool_size": 2048 00:03:34.873 } 00:03:34.873 } 00:03:34.873 ] 00:03:34.873 } 00:03:34.873 ] 00:03:34.873 } 00:03:34.873 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:34.873 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2158786 00:03:34.873 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2158786 ']' 00:03:34.873 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2158786 00:03:34.873 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:34.873 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:34.873 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2158786 00:03:35.132 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:35.132 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:35.132 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2158786' 00:03:35.132 killing process with pid 2158786 00:03:35.132 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2158786 00:03:35.132 00:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2158786 00:03:35.391 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2158926 00:03:35.392 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.392 00:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2158926 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2158926 ']' 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2158926 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2158926 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2158926' 00:03:40.668 killing process with pid 2158926 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2158926 00:03:40.668 00:07:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2158926 
00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:40.927 00:03:40.927 real 0m6.628s 00:03:40.927 user 0m6.225s 00:03:40.927 sys 0m0.685s 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:40.927 ************************************ 00:03:40.927 END TEST skip_rpc_with_json 00:03:40.927 ************************************ 00:03:40.927 00:07:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:40.927 00:07:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:40.927 00:07:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.927 00:07:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.927 00:07:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.927 ************************************ 00:03:40.927 START TEST skip_rpc_with_delay 00:03:40.927 ************************************ 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:40.927 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.185 [2024-07-16 00:07:32.553769] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
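
The `grep -q 'TCP Transport Init'` at the top of this block is skip_rpc_with_json's final pass criterion: the restarted, JSON-configured target's output was captured to log.txt, and the transport-init notice must appear there before the file is removed. skip_rpc_with_delay, which follows, is another expected-failure case: `--wait-for-rpc` tells spdk_tgt to pause before subsystem initialization until a later framework_start_init RPC arrives, which is impossible when `--no-rpc-server` is also given, so the app must refuse to start. The *ERROR* line above is the pass condition, inverted by the NOT wrapper (the es=1 in the lines that follow). The offending combination, in isolation:

    # expected to exit non-zero with:
    # "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
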
00:03:41.185 [2024-07-16 00:07:32.553884] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:41.185 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:41.185 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:41.185 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:41.185 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:41.185 00:03:41.185 real 0m0.072s 00:03:41.185 user 0m0.041s 00:03:41.185 sys 0m0.030s 00:03:41.185 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.186 00:07:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:41.186 ************************************ 00:03:41.186 END TEST skip_rpc_with_delay 00:03:41.186 ************************************ 00:03:41.186 00:07:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:41.186 00:07:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:41.186 00:07:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:41.186 00:07:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:41.186 00:07:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.186 00:07:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.186 00:07:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.186 ************************************ 00:03:41.186 START TEST exit_on_failed_rpc_init 00:03:41.186 ************************************ 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2159644 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2159644 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2159644 ']' 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:41.186 00:07:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.186 [2024-07-16 00:07:32.674165] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:03:41.186 [2024-07-16 00:07:32.674268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159644 ] 00:03:41.186 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.186 [2024-07-16 00:07:32.731752] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.446 [2024-07-16 00:07:32.840449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:41.705 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.705 [2024-07-16 00:07:33.158147] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:03:41.705 [2024-07-16 00:07:33.158263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159769 ] 00:03:41.705 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.705 [2024-07-16 00:07:33.219627] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.964 [2024-07-16 00:07:33.337528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:41.964 [2024-07-16 00:07:33.337677] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:41.964 [2024-07-16 00:07:33.337696] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:41.964 [2024-07-16 00:07:33.337710] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2159644 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2159644 ']' 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2159644 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2159644 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2159644' 00:03:41.964 killing process with pid 2159644 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2159644 00:03:41.964 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2159644 00:03:42.532 00:03:42.532 real 0m1.342s 00:03:42.532 user 0m1.523s 00:03:42.532 sys 0m0.445s 00:03:42.532 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.532 00:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:42.532 ************************************ 00:03:42.532 END TEST exit_on_failed_rpc_init 00:03:42.532 ************************************ 00:03:42.532 00:07:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:42.532 00:07:33 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.532 00:03:42.532 real 0m13.792s 00:03:42.532 user 0m13.081s 00:03:42.532 sys 0m1.639s 00:03:42.532 00:07:33 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.532 00:07:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.532 ************************************ 00:03:42.532 END TEST skip_rpc 00:03:42.532 ************************************ 00:03:42.532 00:07:34 -- common/autotest_common.sh@1142 -- # return 0 00:03:42.532 00:07:34 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:42.532 00:07:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.532 00:07:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.532 00:07:34 -- common/autotest_common.sh@10 -- # set +x 00:03:42.532 ************************************ 00:03:42.532 START TEST rpc_client 00:03:42.532 ************************************ 00:03:42.532 00:07:34 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:42.532 * Looking for test storage... 00:03:42.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:42.532 00:07:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:42.532 OK 00:03:42.532 00:07:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:42.532 00:03:42.532 real 0m0.068s 00:03:42.532 user 0m0.029s 00:03:42.532 sys 0m0.044s 00:03:42.532 00:07:34 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.532 00:07:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:42.532 ************************************ 00:03:42.532 END TEST rpc_client 00:03:42.532 ************************************ 00:03:42.532 00:07:34 -- common/autotest_common.sh@1142 -- # return 0 00:03:42.532 00:07:34 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:42.532 00:07:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.532 00:07:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.532 00:07:34 -- common/autotest_common.sh@10 -- # set +x 00:03:42.791 ************************************ 00:03:42.791 START TEST json_config 00:03:42.791 ************************************ 00:03:42.791 00:07:34 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:42.791 00:07:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.791 
00:07:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:42.791 00:07:34 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.791 00:07:34 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.791 00:07:34 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.791 00:07:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.791 00:07:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.791 00:07:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.791 00:07:34 json_config -- paths/export.sh@5 -- # export PATH 00:03:42.791 00:07:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@47 -- # : 0 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:42.791 00:07:34 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:42.791 00:07:34 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:42.791 00:07:34 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:42.791 00:07:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:42.792 INFO: JSON configuration test init 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.792 00:07:34 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:42.792 00:07:34 json_config -- json_config/common.sh@9 -- # local app=target 00:03:42.792 00:07:34 json_config -- json_config/common.sh@10 -- # shift 00:03:42.792 00:07:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:42.792 00:07:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:42.792 00:07:34 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:42.792 00:07:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.792 00:07:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.792 00:07:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2160013 00:03:42.792 00:07:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:42.792 00:07:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:42.792 Waiting for target to run... 00:03:42.792 00:07:34 json_config -- json_config/common.sh@25 -- # waitforlisten 2160013 /var/tmp/spdk_tgt.sock 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 2160013 ']' 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:42.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:42.792 00:07:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.792 [2024-07-16 00:07:34.259480] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:03:42.792 [2024-07-16 00:07:34.259581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160013 ] 00:03:42.792 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.052 [2024-07-16 00:07:34.615435] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.311 [2024-07-16 00:07:34.704095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.569 00:07:35 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:43.569 00:07:35 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:43.569 00:07:35 json_config -- json_config/common.sh@26 -- # echo '' 00:03:43.569 00:03:43.569 00:07:35 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:43.569 00:07:35 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:43.569 00:07:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:43.569 00:07:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.828 00:07:35 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:43.828 00:07:35 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:43.828 00:07:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:43.828 00:07:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.828 00:07:35 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:43.828 00:07:35 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:43.828 00:07:35 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:47.150 00:07:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.150 00:07:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:47.150 00:07:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:47.150 00:07:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:47.150 00:07:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:47.150 00:07:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.150 00:07:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:47.150 00:07:38 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:47.150 00:07:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:47.407 MallocForNvmf0 00:03:47.407 00:07:38 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:47.407 00:07:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:47.664 MallocForNvmf1 00:03:47.664 00:07:39 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:47.664 00:07:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:47.922 [2024-07-16 00:07:39.363070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:47.922 00:07:39 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:47.922 00:07:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:48.179 00:07:39 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:48.179 00:07:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:48.437 00:07:39 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:48.437 00:07:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:48.694 00:07:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:48.694 00:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:48.952 [2024-07-16 00:07:40.370479] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:48.952 00:07:40 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:48.952 00:07:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:48.952 00:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.952 00:07:40 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:48.952 00:07:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:48.952 00:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.952 00:07:40 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:48.952 00:07:40 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:48.952 00:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:49.210 MallocBdevForConfigChangeCheck 00:03:49.210 00:07:40 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:49.210 00:07:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:49.210 00:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.210 00:07:40 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:49.210 00:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:49.469 00:07:41 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:49.469 INFO: shutting down applications... 00:03:49.469 00:07:41 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:49.469 00:07:41 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:49.469 00:07:41 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:49.469 00:07:41 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:51.374 Calling clear_iscsi_subsystem 00:03:51.374 Calling clear_nvmf_subsystem 00:03:51.374 Calling clear_nbd_subsystem 00:03:51.374 Calling clear_ublk_subsystem 00:03:51.374 Calling clear_vhost_blk_subsystem 00:03:51.374 Calling clear_vhost_scsi_subsystem 00:03:51.374 Calling clear_bdev_subsystem 00:03:51.374 00:07:42 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:51.374 00:07:42 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:51.374 00:07:42 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:51.374 00:07:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:51.375 00:07:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:51.375 00:07:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:51.635 00:07:43 json_config -- json_config/json_config.sh@345 -- # break 00:03:51.635 00:07:43 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:51.635 00:07:43 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:51.635 00:07:43 json_config -- json_config/common.sh@31 -- # local app=target 00:03:51.635 00:07:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:51.635 00:07:43 json_config -- json_config/common.sh@35 -- # [[ -n 2160013 ]] 00:03:51.635 00:07:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2160013 00:03:51.635 00:07:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:51.635 00:07:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:51.635 00:07:43 json_config -- json_config/common.sh@41 -- # kill -0 2160013 00:03:51.635 00:07:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:52.204 00:07:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:52.204 00:07:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:52.204 00:07:43 json_config -- json_config/common.sh@41 -- # kill -0 2160013 00:03:52.204 00:07:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:52.204 00:07:43 json_config -- json_config/common.sh@43 -- # break 00:03:52.204 00:07:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:52.204 00:07:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:03:52.204 SPDK target shutdown done 00:03:52.204 00:07:43 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:52.204 INFO: relaunching applications... 00:03:52.204 00:07:43 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.204 00:07:43 json_config -- json_config/common.sh@9 -- # local app=target 00:03:52.204 00:07:43 json_config -- json_config/common.sh@10 -- # shift 00:03:52.204 00:07:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:52.204 00:07:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:52.204 00:07:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:52.204 00:07:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.204 00:07:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.204 00:07:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2161211 00:03:52.204 00:07:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.204 00:07:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:52.204 Waiting for target to run... 00:03:52.204 00:07:43 json_config -- json_config/common.sh@25 -- # waitforlisten 2161211 /var/tmp/spdk_tgt.sock 00:03:52.204 00:07:43 json_config -- common/autotest_common.sh@829 -- # '[' -z 2161211 ']' 00:03:52.204 00:07:43 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:52.204 00:07:43 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:52.204 00:07:43 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:52.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:52.204 00:07:43 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:52.204 00:07:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.204 [2024-07-16 00:07:43.686261] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:03:52.204 [2024-07-16 00:07:43.686355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161211 ] 00:03:52.204 EAL: No free 2048 kB hugepages reported on node 1 00:03:52.773 [2024-07-16 00:07:44.178577] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.773 [2024-07-16 00:07:44.286052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.062 [2024-07-16 00:07:47.328968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.062 [2024-07-16 00:07:47.361400] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:56.626 00:07:48 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.626 00:07:48 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:56.626 00:07:48 json_config -- json_config/common.sh@26 -- # echo '' 00:03:56.626 00:03:56.626 00:07:48 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:56.626 00:07:48 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:56.626 INFO: Checking if target configuration is the same... 00:03:56.626 00:07:48 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.626 00:07:48 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:56.626 00:07:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.626 + '[' 2 -ne 2 ']' 00:03:56.626 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:56.626 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:56.626 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.626 +++ basename /dev/fd/62 00:03:56.626 ++ mktemp /tmp/62.XXX 00:03:56.626 + tmp_file_1=/tmp/62.8xP 00:03:56.626 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.626 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.626 + tmp_file_2=/tmp/spdk_tgt_config.json.wkp 00:03:56.626 + ret=0 00:03:56.626 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.193 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.193 + diff -u /tmp/62.8xP /tmp/spdk_tgt_config.json.wkp 00:03:57.193 + echo 'INFO: JSON config files are the same' 00:03:57.193 INFO: JSON config files are the same 00:03:57.193 + rm /tmp/62.8xP /tmp/spdk_tgt_config.json.wkp 00:03:57.193 + exit 0 00:03:57.193 00:07:48 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:57.193 00:07:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:57.193 INFO: changing configuration and checking if this can be detected... 
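The '+ ...' trace above is json_diff.sh proving the relaunched target reproduced its boot-time JSON: both configs are normalized with config_filter.py -method sort and compared with diff -u, and identical output exits 0. Condensed into a hedged sketch (temp-file names and paths are illustrative):

    # Dump the live config, canonicalize both sides, and compare.
    live=$(mktemp /tmp/62.XXX)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > "$live"
    test/json_config/config_filter.py -method sort < "$live" > "$live.sorted"
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/expected.sorted
    if diff -u "$live.sorted" /tmp/expected.sorted; then
        echo 'INFO: JSON config files are the same'
    fi

Sorting both documents first means ordering differences introduced by save_config cannot cause false failures; diff exits 0 only when the canonical forms match.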
00:03:57.193 00:07:48 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:57.193 00:07:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:57.453 00:07:48 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.453 00:07:48 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:57.453 00:07:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.453 + '[' 2 -ne 2 ']' 00:03:57.453 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:57.453 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:57.453 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.453 +++ basename /dev/fd/62 00:03:57.453 ++ mktemp /tmp/62.XXX 00:03:57.453 + tmp_file_1=/tmp/62.Thl 00:03:57.453 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.453 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:57.453 + tmp_file_2=/tmp/spdk_tgt_config.json.tX5 00:03:57.453 + ret=0 00:03:57.453 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.712 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.712 + diff -u /tmp/62.Thl /tmp/spdk_tgt_config.json.tX5 00:03:57.712 + ret=1 00:03:57.712 + echo '=== Start of file: /tmp/62.Thl ===' 00:03:57.712 + cat /tmp/62.Thl 00:03:57.712 + echo '=== End of file: /tmp/62.Thl ===' 00:03:57.712 + echo '' 00:03:57.712 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tX5 ===' 00:03:57.712 + cat /tmp/spdk_tgt_config.json.tX5 00:03:57.712 + echo '=== End of file: /tmp/spdk_tgt_config.json.tX5 ===' 00:03:57.712 + echo '' 00:03:57.712 + rm /tmp/62.Thl /tmp/spdk_tgt_config.json.tX5 00:03:57.712 + exit 1 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:57.712 INFO: configuration change detected. 
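The counterpart check: after deleting MallocBdevForConfigChangeCheck (a malloc bdev created solely to be removed here), the same sorted diff must now fail, which is why ret=1 and the start/end file dumps above count as success. A sketch of that negative half, under the same illustrative naming:

    # Mutate the running target, then require the config diff to fail.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config |
        test/json_config/config_filter.py -method sort > "$live.sorted"
    if diff -u "$live.sorted" /tmp/expected.sorted; then
        echo 'ERROR: configuration change was not detected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'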
00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:57.712 00:07:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.712 00:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@317 -- # [[ -n 2161211 ]] 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:57.712 00:07:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.712 00:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@193 -- # uname -s 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:57.712 00:07:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:57.712 00:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.712 00:07:49 json_config -- json_config/json_config.sh@323 -- # killprocess 2161211 00:03:57.712 00:07:49 json_config -- common/autotest_common.sh@948 -- # '[' -z 2161211 ']' 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@952 -- # kill -0 2161211 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@953 -- # uname 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2161211 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2161211' 00:03:57.713 killing process with pid 2161211 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@967 -- # kill 2161211 00:03:57.713 00:07:49 json_config -- common/autotest_common.sh@972 -- # wait 2161211 00:03:59.633 00:07:51 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.633 00:07:51 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:03:59.633 00:07:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.633 00:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.633 00:07:51 json_config -- json_config/json_config.sh@328 -- # return 0 00:03:59.633 00:07:51 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:03:59.633 INFO: Success 00:03:59.633 00:03:59.633 real 0m16.892s 
00:03:59.633 user 0m18.933s 00:03:59.633 sys 0m2.076s 00:03:59.633 00:07:51 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.633 00:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.633 ************************************ 00:03:59.633 END TEST json_config 00:03:59.633 ************************************ 00:03:59.633 00:07:51 -- common/autotest_common.sh@1142 -- # return 0 00:03:59.633 00:07:51 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:59.633 00:07:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.633 00:07:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.633 00:07:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.633 ************************************ 00:03:59.633 START TEST json_config_extra_key 00:03:59.633 ************************************ 00:03:59.633 00:07:51 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:59.633 00:07:51 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.633 00:07:51 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.633 00:07:51 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.633 00:07:51 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.633 00:07:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.633 00:07:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.633 00:07:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:59.633 00:07:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:59.633 00:07:51 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:59.633 00:07:51 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:59.633 INFO: launching applications... 00:03:59.633 00:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2162249 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:59.633 Waiting for target to run... 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:59.633 00:07:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2162249 /var/tmp/spdk_tgt.sock 00:03:59.633 00:07:51 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2162249 ']' 00:03:59.633 00:07:51 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:59.633 00:07:51 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:59.633 00:07:51 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:59.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:59.633 00:07:51 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:59.633 00:07:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:59.633 [2024-07-16 00:07:51.197906] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:03:59.633 [2024-07-16 00:07:51.197990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162249 ] 00:03:59.633 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.201 [2024-07-16 00:07:51.541867] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.201 [2024-07-16 00:07:51.630458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.769 00:07:52 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:00.769 00:07:52 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:00.769 00:04:00.769 00:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:00.769 INFO: shutting down applications... 00:04:00.769 00:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2162249 ]] 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2162249 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2162249 00:04:00.769 00:07:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:01.027 00:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:01.027 00:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.027 00:07:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2162249 00:04:01.027 00:07:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:01.675 00:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:01.675 00:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.675 00:07:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2162249 00:04:01.675 00:07:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:01.675 00:07:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:01.675 00:07:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:01.675 00:07:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:01.675 SPDK target shutdown done 00:04:01.675 00:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:01.675 Success 00:04:01.675 00:04:01.675 real 0m2.033s 00:04:01.675 user 0m1.542s 00:04:01.675 sys 0m0.436s 00:04:01.675 00:07:53 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.675 00:07:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:01.675 ************************************ 00:04:01.675 END TEST json_config_extra_key 00:04:01.675 ************************************ 00:04:01.675 00:07:53 -- 
common/autotest_common.sh@1142 -- # return 0 00:04:01.675 00:07:53 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:01.675 00:07:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.675 00:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.675 00:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.675 ************************************ 00:04:01.675 START TEST alias_rpc 00:04:01.675 ************************************ 00:04:01.675 00:07:53 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:01.675 * Looking for test storage... 00:04:01.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:01.675 00:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:01.675 00:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2162566 00:04:01.675 00:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.675 00:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2162566 00:04:01.675 00:07:53 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2162566 ']' 00:04:01.675 00:07:53 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.675 00:07:53 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.675 00:07:53 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.675 00:07:53 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.675 00:07:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.675 [2024-07-16 00:07:53.283838] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:04:01.675 [2024-07-16 00:07:53.283946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162566 ] 00:04:01.933 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.933 [2024-07-16 00:07:53.340831] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.933 [2024-07-16 00:07:53.445398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.190 00:07:53 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:02.190 00:07:53 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:02.190 00:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:02.447 00:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2162566 00:04:02.447 00:07:53 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2162566 ']' 00:04:02.447 00:07:53 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2162566 00:04:02.447 00:07:53 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:02.447 00:07:53 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:02.447 00:07:53 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2162566 00:04:02.447 00:07:54 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:02.447 00:07:54 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:02.447 00:07:54 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2162566' 00:04:02.447 killing process with pid 2162566 00:04:02.447 00:07:54 alias_rpc -- common/autotest_common.sh@967 -- # kill 2162566 00:04:02.447 00:07:54 alias_rpc -- common/autotest_common.sh@972 -- # wait 2162566 00:04:03.014 00:04:03.014 real 0m1.279s 00:04:03.014 user 0m1.369s 00:04:03.014 sys 0m0.412s 00:04:03.014 00:07:54 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.014 00:07:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.014 ************************************ 00:04:03.014 END TEST alias_rpc 00:04:03.014 ************************************ 00:04:03.014 00:07:54 -- common/autotest_common.sh@1142 -- # return 0 00:04:03.014 00:07:54 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:03.014 00:07:54 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.014 00:07:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.014 00:07:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.014 00:07:54 -- common/autotest_common.sh@10 -- # set +x 00:04:03.014 ************************************ 00:04:03.014 START TEST spdkcli_tcp 00:04:03.014 ************************************ 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.014 * Looking for test storage... 
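For reference, the lifecycle the json_config_extra_key and alias_rpc runs above exercise is small: start spdk_tgt, poll its UNIX-domain RPC socket until it listens, drive it with rpc.py, then SIGINT it and poll kill -0 until it exits. A minimal sketch of that cycle, assuming the standard in-tree paths; the load_config payload is a placeholder, not the test's real config:

  SPDK_BIN=./build/bin/spdk_tgt
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

  $SPDK_BIN &
  tgt_pid=$!

  # rough stand-in for waitforlisten: retry until the socket answers
  until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

  # alias_rpc feeds a JSON config on stdin to load_config -i, as traced above
  echo '{"subsystems": []}' | $RPC load_config -i   # placeholder payload

  # killprocess: SIGINT, then poll kill -0 for up to ~15s (30 * 0.5s), as traced
  kill -SIGINT $tgt_pid
  for ((i = 0; i < 30; i++)); do
      kill -0 $tgt_pid 2>/dev/null || break
      sleep 0.5
  done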
00:04:03.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2162754 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:03.014 00:07:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2162754 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2162754 ']' 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:03.014 00:07:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.014 [2024-07-16 00:07:54.618774] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:04:03.014 [2024-07-16 00:07:54.618855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162754 ] 00:04:03.271 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.271 [2024-07-16 00:07:54.680323] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.271 [2024-07-16 00:07:54.801905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.271 [2024-07-16 00:07:54.801915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.529 00:07:55 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.529 00:07:55 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:03.529 00:07:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2162771 00:04:03.529 00:07:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:03.529 00:07:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:03.787 [ 00:04:03.787 "bdev_malloc_delete", 00:04:03.787 "bdev_malloc_create", 00:04:03.787 "bdev_null_resize", 00:04:03.787 "bdev_null_delete", 00:04:03.787 "bdev_null_create", 00:04:03.787 "bdev_nvme_cuse_unregister", 00:04:03.787 "bdev_nvme_cuse_register", 00:04:03.787 "bdev_opal_new_user", 00:04:03.787 "bdev_opal_set_lock_state", 00:04:03.787 "bdev_opal_delete", 00:04:03.787 "bdev_opal_get_info", 00:04:03.787 "bdev_opal_create", 00:04:03.787 "bdev_nvme_opal_revert", 00:04:03.787 "bdev_nvme_opal_init", 00:04:03.787 "bdev_nvme_send_cmd", 00:04:03.787 "bdev_nvme_get_path_iostat", 00:04:03.787 "bdev_nvme_get_mdns_discovery_info", 00:04:03.787 "bdev_nvme_stop_mdns_discovery", 00:04:03.787 "bdev_nvme_start_mdns_discovery", 00:04:03.787 "bdev_nvme_set_multipath_policy", 00:04:03.787 "bdev_nvme_set_preferred_path", 00:04:03.787 "bdev_nvme_get_io_paths", 00:04:03.787 "bdev_nvme_remove_error_injection", 00:04:03.787 "bdev_nvme_add_error_injection", 00:04:03.787 "bdev_nvme_get_discovery_info", 00:04:03.787 "bdev_nvme_stop_discovery", 00:04:03.787 "bdev_nvme_start_discovery", 00:04:03.787 "bdev_nvme_get_controller_health_info", 00:04:03.787 "bdev_nvme_disable_controller", 00:04:03.787 "bdev_nvme_enable_controller", 00:04:03.787 "bdev_nvme_reset_controller", 00:04:03.787 "bdev_nvme_get_transport_statistics", 00:04:03.787 "bdev_nvme_apply_firmware", 00:04:03.787 "bdev_nvme_detach_controller", 00:04:03.787 "bdev_nvme_get_controllers", 00:04:03.787 "bdev_nvme_attach_controller", 00:04:03.787 "bdev_nvme_set_hotplug", 00:04:03.787 "bdev_nvme_set_options", 00:04:03.787 "bdev_passthru_delete", 00:04:03.787 "bdev_passthru_create", 00:04:03.787 "bdev_lvol_set_parent_bdev", 00:04:03.787 "bdev_lvol_set_parent", 00:04:03.787 "bdev_lvol_check_shallow_copy", 00:04:03.787 "bdev_lvol_start_shallow_copy", 00:04:03.787 "bdev_lvol_grow_lvstore", 00:04:03.787 "bdev_lvol_get_lvols", 00:04:03.787 "bdev_lvol_get_lvstores", 00:04:03.787 "bdev_lvol_delete", 00:04:03.787 "bdev_lvol_set_read_only", 00:04:03.787 "bdev_lvol_resize", 00:04:03.787 "bdev_lvol_decouple_parent", 00:04:03.787 "bdev_lvol_inflate", 00:04:03.787 "bdev_lvol_rename", 00:04:03.787 "bdev_lvol_clone_bdev", 00:04:03.787 "bdev_lvol_clone", 00:04:03.787 "bdev_lvol_snapshot", 00:04:03.787 "bdev_lvol_create", 00:04:03.787 "bdev_lvol_delete_lvstore", 00:04:03.787 
"bdev_lvol_rename_lvstore", 00:04:03.787 "bdev_lvol_create_lvstore", 00:04:03.787 "bdev_raid_set_options", 00:04:03.787 "bdev_raid_remove_base_bdev", 00:04:03.787 "bdev_raid_add_base_bdev", 00:04:03.787 "bdev_raid_delete", 00:04:03.787 "bdev_raid_create", 00:04:03.787 "bdev_raid_get_bdevs", 00:04:03.787 "bdev_error_inject_error", 00:04:03.787 "bdev_error_delete", 00:04:03.787 "bdev_error_create", 00:04:03.787 "bdev_split_delete", 00:04:03.787 "bdev_split_create", 00:04:03.787 "bdev_delay_delete", 00:04:03.787 "bdev_delay_create", 00:04:03.787 "bdev_delay_update_latency", 00:04:03.787 "bdev_zone_block_delete", 00:04:03.787 "bdev_zone_block_create", 00:04:03.787 "blobfs_create", 00:04:03.787 "blobfs_detect", 00:04:03.787 "blobfs_set_cache_size", 00:04:03.787 "bdev_aio_delete", 00:04:03.787 "bdev_aio_rescan", 00:04:03.787 "bdev_aio_create", 00:04:03.787 "bdev_ftl_set_property", 00:04:03.787 "bdev_ftl_get_properties", 00:04:03.787 "bdev_ftl_get_stats", 00:04:03.787 "bdev_ftl_unmap", 00:04:03.787 "bdev_ftl_unload", 00:04:03.787 "bdev_ftl_delete", 00:04:03.787 "bdev_ftl_load", 00:04:03.787 "bdev_ftl_create", 00:04:03.787 "bdev_virtio_attach_controller", 00:04:03.787 "bdev_virtio_scsi_get_devices", 00:04:03.787 "bdev_virtio_detach_controller", 00:04:03.787 "bdev_virtio_blk_set_hotplug", 00:04:03.787 "bdev_iscsi_delete", 00:04:03.787 "bdev_iscsi_create", 00:04:03.787 "bdev_iscsi_set_options", 00:04:03.787 "accel_error_inject_error", 00:04:03.787 "ioat_scan_accel_module", 00:04:03.787 "dsa_scan_accel_module", 00:04:03.787 "iaa_scan_accel_module", 00:04:03.787 "vfu_virtio_create_scsi_endpoint", 00:04:03.787 "vfu_virtio_scsi_remove_target", 00:04:03.787 "vfu_virtio_scsi_add_target", 00:04:03.787 "vfu_virtio_create_blk_endpoint", 00:04:03.787 "vfu_virtio_delete_endpoint", 00:04:03.787 "keyring_file_remove_key", 00:04:03.787 "keyring_file_add_key", 00:04:03.787 "keyring_linux_set_options", 00:04:03.787 "iscsi_get_histogram", 00:04:03.787 "iscsi_enable_histogram", 00:04:03.787 "iscsi_set_options", 00:04:03.787 "iscsi_get_auth_groups", 00:04:03.787 "iscsi_auth_group_remove_secret", 00:04:03.787 "iscsi_auth_group_add_secret", 00:04:03.787 "iscsi_delete_auth_group", 00:04:03.787 "iscsi_create_auth_group", 00:04:03.787 "iscsi_set_discovery_auth", 00:04:03.787 "iscsi_get_options", 00:04:03.787 "iscsi_target_node_request_logout", 00:04:03.787 "iscsi_target_node_set_redirect", 00:04:03.787 "iscsi_target_node_set_auth", 00:04:03.787 "iscsi_target_node_add_lun", 00:04:03.787 "iscsi_get_stats", 00:04:03.787 "iscsi_get_connections", 00:04:03.787 "iscsi_portal_group_set_auth", 00:04:03.787 "iscsi_start_portal_group", 00:04:03.787 "iscsi_delete_portal_group", 00:04:03.787 "iscsi_create_portal_group", 00:04:03.787 "iscsi_get_portal_groups", 00:04:03.787 "iscsi_delete_target_node", 00:04:03.787 "iscsi_target_node_remove_pg_ig_maps", 00:04:03.787 "iscsi_target_node_add_pg_ig_maps", 00:04:03.787 "iscsi_create_target_node", 00:04:03.787 "iscsi_get_target_nodes", 00:04:03.788 "iscsi_delete_initiator_group", 00:04:03.788 "iscsi_initiator_group_remove_initiators", 00:04:03.788 "iscsi_initiator_group_add_initiators", 00:04:03.788 "iscsi_create_initiator_group", 00:04:03.788 "iscsi_get_initiator_groups", 00:04:03.788 "nvmf_set_crdt", 00:04:03.788 "nvmf_set_config", 00:04:03.788 "nvmf_set_max_subsystems", 00:04:03.788 "nvmf_stop_mdns_prr", 00:04:03.788 "nvmf_publish_mdns_prr", 00:04:03.788 "nvmf_subsystem_get_listeners", 00:04:03.788 "nvmf_subsystem_get_qpairs", 00:04:03.788 "nvmf_subsystem_get_controllers", 00:04:03.788 
"nvmf_get_stats", 00:04:03.788 "nvmf_get_transports", 00:04:03.788 "nvmf_create_transport", 00:04:03.788 "nvmf_get_targets", 00:04:03.788 "nvmf_delete_target", 00:04:03.788 "nvmf_create_target", 00:04:03.788 "nvmf_subsystem_allow_any_host", 00:04:03.788 "nvmf_subsystem_remove_host", 00:04:03.788 "nvmf_subsystem_add_host", 00:04:03.788 "nvmf_ns_remove_host", 00:04:03.788 "nvmf_ns_add_host", 00:04:03.788 "nvmf_subsystem_remove_ns", 00:04:03.788 "nvmf_subsystem_add_ns", 00:04:03.788 "nvmf_subsystem_listener_set_ana_state", 00:04:03.788 "nvmf_discovery_get_referrals", 00:04:03.788 "nvmf_discovery_remove_referral", 00:04:03.788 "nvmf_discovery_add_referral", 00:04:03.788 "nvmf_subsystem_remove_listener", 00:04:03.788 "nvmf_subsystem_add_listener", 00:04:03.788 "nvmf_delete_subsystem", 00:04:03.788 "nvmf_create_subsystem", 00:04:03.788 "nvmf_get_subsystems", 00:04:03.788 "env_dpdk_get_mem_stats", 00:04:03.788 "nbd_get_disks", 00:04:03.788 "nbd_stop_disk", 00:04:03.788 "nbd_start_disk", 00:04:03.788 "ublk_recover_disk", 00:04:03.788 "ublk_get_disks", 00:04:03.788 "ublk_stop_disk", 00:04:03.788 "ublk_start_disk", 00:04:03.788 "ublk_destroy_target", 00:04:03.788 "ublk_create_target", 00:04:03.788 "virtio_blk_create_transport", 00:04:03.788 "virtio_blk_get_transports", 00:04:03.788 "vhost_controller_set_coalescing", 00:04:03.788 "vhost_get_controllers", 00:04:03.788 "vhost_delete_controller", 00:04:03.788 "vhost_create_blk_controller", 00:04:03.788 "vhost_scsi_controller_remove_target", 00:04:03.788 "vhost_scsi_controller_add_target", 00:04:03.788 "vhost_start_scsi_controller", 00:04:03.788 "vhost_create_scsi_controller", 00:04:03.788 "thread_set_cpumask", 00:04:03.788 "framework_get_governor", 00:04:03.788 "framework_get_scheduler", 00:04:03.788 "framework_set_scheduler", 00:04:03.788 "framework_get_reactors", 00:04:03.788 "thread_get_io_channels", 00:04:03.788 "thread_get_pollers", 00:04:03.788 "thread_get_stats", 00:04:03.788 "framework_monitor_context_switch", 00:04:03.788 "spdk_kill_instance", 00:04:03.788 "log_enable_timestamps", 00:04:03.788 "log_get_flags", 00:04:03.788 "log_clear_flag", 00:04:03.788 "log_set_flag", 00:04:03.788 "log_get_level", 00:04:03.788 "log_set_level", 00:04:03.788 "log_get_print_level", 00:04:03.788 "log_set_print_level", 00:04:03.788 "framework_enable_cpumask_locks", 00:04:03.788 "framework_disable_cpumask_locks", 00:04:03.788 "framework_wait_init", 00:04:03.788 "framework_start_init", 00:04:03.788 "scsi_get_devices", 00:04:03.788 "bdev_get_histogram", 00:04:03.788 "bdev_enable_histogram", 00:04:03.788 "bdev_set_qos_limit", 00:04:03.788 "bdev_set_qd_sampling_period", 00:04:03.788 "bdev_get_bdevs", 00:04:03.788 "bdev_reset_iostat", 00:04:03.788 "bdev_get_iostat", 00:04:03.788 "bdev_examine", 00:04:03.788 "bdev_wait_for_examine", 00:04:03.788 "bdev_set_options", 00:04:03.788 "notify_get_notifications", 00:04:03.788 "notify_get_types", 00:04:03.788 "accel_get_stats", 00:04:03.788 "accel_set_options", 00:04:03.788 "accel_set_driver", 00:04:03.788 "accel_crypto_key_destroy", 00:04:03.788 "accel_crypto_keys_get", 00:04:03.788 "accel_crypto_key_create", 00:04:03.788 "accel_assign_opc", 00:04:03.788 "accel_get_module_info", 00:04:03.788 "accel_get_opc_assignments", 00:04:03.788 "vmd_rescan", 00:04:03.788 "vmd_remove_device", 00:04:03.788 "vmd_enable", 00:04:03.788 "sock_get_default_impl", 00:04:03.788 "sock_set_default_impl", 00:04:03.788 "sock_impl_set_options", 00:04:03.788 "sock_impl_get_options", 00:04:03.788 "iobuf_get_stats", 00:04:03.788 "iobuf_set_options", 
00:04:03.788 "keyring_get_keys", 00:04:03.788 "framework_get_pci_devices", 00:04:03.788 "framework_get_config", 00:04:03.788 "framework_get_subsystems", 00:04:03.788 "vfu_tgt_set_base_path", 00:04:03.788 "trace_get_info", 00:04:03.788 "trace_get_tpoint_group_mask", 00:04:03.788 "trace_disable_tpoint_group", 00:04:03.788 "trace_enable_tpoint_group", 00:04:03.788 "trace_clear_tpoint_mask", 00:04:03.788 "trace_set_tpoint_mask", 00:04:03.788 "spdk_get_version", 00:04:03.788 "rpc_get_methods" 00:04:03.788 ] 00:04:03.788 00:07:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.788 00:07:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:03.788 00:07:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2162754 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2162754 ']' 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2162754 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2162754 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2162754' 00:04:03.788 killing process with pid 2162754 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2162754 00:04:03.788 00:07:55 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2162754 00:04:04.355 00:04:04.355 real 0m1.333s 00:04:04.355 user 0m2.354s 00:04:04.355 sys 0m0.440s 00:04:04.355 00:07:55 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.355 00:07:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.355 ************************************ 00:04:04.355 END TEST spdkcli_tcp 00:04:04.355 ************************************ 00:04:04.355 00:07:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.355 00:07:55 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:04.355 00:07:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.355 00:07:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.355 00:07:55 -- common/autotest_common.sh@10 -- # set +x 00:04:04.355 ************************************ 00:04:04.355 START TEST dpdk_mem_utility 00:04:04.355 ************************************ 00:04:04.355 00:07:55 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:04.355 * Looking for test storage... 
00:04:04.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:04.355 00:07:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:04.355 00:07:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2162960 00:04:04.355 00:07:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.355 00:07:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2162960 00:04:04.355 00:07:55 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2162960 ']' 00:04:04.355 00:07:55 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.355 00:07:55 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.355 00:07:55 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.355 00:07:55 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.355 00:07:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:04.613 [2024-07-16 00:07:55.994676] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:04.613 [2024-07-16 00:07:55.994770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162960 ] 00:04:04.613 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.613 [2024-07-16 00:07:56.053994] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.613 [2024-07-16 00:07:56.159323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.872 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:04.872 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:04.872 00:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:04.872 00:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:04.872 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.872 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:04.872 { 00:04:04.872 "filename": "/tmp/spdk_mem_dump.txt" 00:04:04.872 } 00:04:04.872 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.872 00:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:05.132 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:05.132 1 heaps totaling size 814.000000 MiB 00:04:05.132 size: 814.000000 MiB heap id: 0 00:04:05.132 end heaps---------- 00:04:05.132 8 mempools totaling size 598.116089 MiB 00:04:05.132 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:05.132 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:05.132 size: 84.521057 MiB name: bdev_io_2162960 00:04:05.132 size: 51.011292 MiB name: evtpool_2162960 00:04:05.132 
size: 50.003479 MiB name: msgpool_2162960 00:04:05.132 size: 21.763794 MiB name: PDU_Pool 00:04:05.132 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:05.132 size: 0.026123 MiB name: Session_Pool 00:04:05.132 end mempools------- 00:04:05.132 6 memzones totaling size 4.142822 MiB 00:04:05.132 size: 1.000366 MiB name: RG_ring_0_2162960 00:04:05.132 size: 1.000366 MiB name: RG_ring_1_2162960 00:04:05.132 size: 1.000366 MiB name: RG_ring_4_2162960 00:04:05.132 size: 1.000366 MiB name: RG_ring_5_2162960 00:04:05.132 size: 0.125366 MiB name: RG_ring_2_2162960 00:04:05.132 size: 0.015991 MiB name: RG_ring_3_2162960 00:04:05.132 end memzones------- 00:04:05.132 00:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:05.132 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:05.132 list of free elements. size: 12.519348 MiB 00:04:05.132 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:05.132 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:05.132 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:05.132 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:05.132 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:05.132 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:05.132 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:05.132 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:05.132 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:05.132 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:05.132 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:05.132 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:05.132 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:05.132 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:05.132 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:05.132 list of standard malloc elements. 
size: 199.218079 MiB 00:04:05.132 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:05.132 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:05.132 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:05.132 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:05.132 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:05.132 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:05.132 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:05.132 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:05.132 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:05.132 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:05.132 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:05.132 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:05.132 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:05.132 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:05.132 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:05.132 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:05.132 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:05.132 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:05.132 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:05.132 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:05.132 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:05.133 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:05.133 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:05.133 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:05.133 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:05.133 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:05.133 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:05.133 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:05.133 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:05.133 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:05.133 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:05.133 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:05.133 list of memzone associated elements. 
size: 602.262573 MiB 00:04:05.133 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:05.133 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:05.133 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:05.133 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:05.133 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:05.133 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2162960_0 00:04:05.133 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:05.133 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2162960_0 00:04:05.133 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:05.133 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2162960_0 00:04:05.133 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:05.133 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:05.133 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:05.133 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:05.133 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:05.133 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2162960 00:04:05.133 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:05.133 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2162960 00:04:05.133 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:05.133 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2162960 00:04:05.133 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:05.133 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:05.133 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:05.133 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:05.133 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:05.133 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:05.133 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:05.133 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:05.133 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:05.133 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2162960 00:04:05.133 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:05.133 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2162960 00:04:05.133 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:05.133 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2162960 00:04:05.133 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:05.133 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2162960 00:04:05.133 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:05.133 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2162960 00:04:05.133 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:05.133 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:05.133 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:05.133 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:05.133 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:05.133 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:05.133 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:05.133 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2162960 00:04:05.133 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:05.133 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:05.133 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:05.133 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:05.133 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:05.133 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2162960 00:04:05.133 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:05.133 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:05.133 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:05.133 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2162960 00:04:05.133 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:05.133 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2162960 00:04:05.133 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:05.133 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:05.133 00:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:05.133 00:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2162960 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2162960 ']' 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2162960 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2162960 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2162960' 00:04:05.133 killing process with pid 2162960 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2162960 00:04:05.133 00:07:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2162960 00:04:05.701 00:04:05.701 real 0m1.149s 00:04:05.701 user 0m1.112s 00:04:05.701 sys 0m0.406s 00:04:05.701 00:07:57 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.701 00:07:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.701 ************************************ 00:04:05.701 END TEST dpdk_mem_utility 00:04:05.701 ************************************ 00:04:05.701 00:07:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:05.701 00:07:57 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:05.701 00:07:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.701 00:07:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.701 00:07:57 -- common/autotest_common.sh@10 -- # set +x 00:04:05.701 ************************************ 00:04:05.701 START TEST event 00:04:05.701 ************************************ 00:04:05.701 00:07:57 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:05.701 * Looking for test storage... 
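The dpdk_mem_utility pass above is a two-step flow: env_dpdk_get_mem_stats makes the running target write its DPDK memory state to the file named in the RPC response, and dpdk_mem_info.py renders that dump as the heap/mempool/memzone summary and, with -m 0, the per-element listing seen above. A sketch against a live target:

  ./scripts/rpc.py env_dpdk_get_mem_stats   # -> {"filename": "/tmp/spdk_mem_dump.txt"}
  ./scripts/dpdk_mem_info.py                # heaps, mempools, memzones summary
  ./scripts/dpdk_mem_info.py -m 0           # detailed element map, as traced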
00:04:05.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:05.701 00:07:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:05.701 00:07:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:05.701 00:07:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:05.701 00:07:57 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:05.701 00:07:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.701 00:07:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.701 ************************************ 00:04:05.701 START TEST event_perf 00:04:05.701 ************************************ 00:04:05.701 00:07:57 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:05.701 Running I/O for 1 seconds...[2024-07-16 00:07:57.171734] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:05.701 [2024-07-16 00:07:57.171788] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163152 ] 00:04:05.701 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.701 [2024-07-16 00:07:57.228599] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:05.960 [2024-07-16 00:07:57.342342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.960 [2024-07-16 00:07:57.342405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:05.960 [2024-07-16 00:07:57.342473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:05.960 [2024-07-16 00:07:57.342476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.898 Running I/O for 1 seconds... 00:04:06.898 lcore 0: 227225 00:04:06.898 lcore 1: 227224 00:04:06.898 lcore 2: 227224 00:04:06.898 lcore 3: 227224 00:04:06.898 done. 00:04:06.898 00:04:06.898 real 0m1.306s 00:04:06.898 user 0m4.220s 00:04:06.898 sys 0m0.081s 00:04:06.898 00:07:58 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.898 00:07:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:06.898 ************************************ 00:04:06.898 END TEST event_perf 00:04:06.898 ************************************ 00:04:06.898 00:07:58 event -- common/autotest_common.sh@1142 -- # return 0 00:04:06.898 00:07:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:06.898 00:07:58 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:06.898 00:07:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.898 00:07:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.157 ************************************ 00:04:07.157 START TEST event_reactor 00:04:07.157 ************************************ 00:04:07.157 00:07:58 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.157 [2024-07-16 00:07:58.525468] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
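event_perf above spins one reactor per core in the 0xF mask for one second and prints how many events each lcore processed; summing those counters gives the aggregate event rate. A quick, illustrative way to total them (binary invocation as traced, the awk pass is an assumption about post-processing):

  ./test/event/event_perf/event_perf -m 0xF -t 1 \
      | awk '/^lcore/ {sum += $3} END {print sum, "events in 1s"}'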
00:04:07.157 [2024-07-16 00:07:58.525534] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163323 ] 00:04:07.157 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.157 [2024-07-16 00:07:58.590447] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.157 [2024-07-16 00:07:58.713075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.534 test_start 00:04:08.534 oneshot 00:04:08.534 tick 100 00:04:08.534 tick 100 00:04:08.534 tick 250 00:04:08.534 tick 100 00:04:08.534 tick 100 00:04:08.534 tick 100 00:04:08.534 tick 250 00:04:08.534 tick 500 00:04:08.534 tick 100 00:04:08.534 tick 100 00:04:08.534 tick 250 00:04:08.534 tick 100 00:04:08.534 tick 100 00:04:08.534 test_end 00:04:08.534 00:04:08.534 real 0m1.323s 00:04:08.534 user 0m1.236s 00:04:08.534 sys 0m0.082s 00:04:08.534 00:07:59 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.534 00:07:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:08.534 ************************************ 00:04:08.534 END TEST event_reactor 00:04:08.534 ************************************ 00:04:08.534 00:07:59 event -- common/autotest_common.sh@1142 -- # return 0 00:04:08.534 00:07:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:08.534 00:07:59 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:08.534 00:07:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.534 00:07:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:08.534 ************************************ 00:04:08.534 START TEST event_reactor_perf 00:04:08.534 ************************************ 00:04:08.534 00:07:59 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:08.534 [2024-07-16 00:07:59.893704] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:04:08.534 [2024-07-16 00:07:59.893772] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163589 ] 00:04:08.534 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.534 [2024-07-16 00:07:59.956541] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.534 [2024-07-16 00:08:00.084196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.915 test_start 00:04:09.915 test_end 00:04:09.915 Performance: 356436 events per second 00:04:09.915 00:04:09.915 real 0m1.328s 00:04:09.915 user 0m1.240s 00:04:09.915 sys 0m0.083s 00:04:09.915 00:08:01 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.915 00:08:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:09.915 ************************************ 00:04:09.915 END TEST event_reactor_perf 00:04:09.915 ************************************ 00:04:09.915 00:08:01 event -- common/autotest_common.sh@1142 -- # return 0 00:04:09.915 00:08:01 event -- event/event.sh@49 -- # uname -s 00:04:09.915 00:08:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:09.915 00:08:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:09.915 00:08:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.915 00:08:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.915 00:08:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.915 ************************************ 00:04:09.915 START TEST event_scheduler 00:04:09.915 ************************************ 00:04:09.915 00:08:01 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:09.915 * Looking for test storage... 00:04:09.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:09.915 00:08:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:09.915 00:08:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2163769 00:04:09.915 00:08:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:09.915 00:08:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.915 00:08:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2163769 00:04:09.915 00:08:01 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2163769 ']' 00:04:09.915 00:08:01 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.915 00:08:01 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:09.916 00:08:01 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:09.916 00:08:01 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:09.916 00:08:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:09.916 [2024-07-16 00:08:01.348826] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:09.916 [2024-07-16 00:08:01.348924] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163769 ] 00:04:09.916 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.916 [2024-07-16 00:08:01.406705] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:09.916 [2024-07-16 00:08:01.515437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.916 [2024-07-16 00:08:01.515527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:09.916 [2024-07-16 00:08:01.515492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.916 [2024-07-16 00:08:01.515532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:10.174 00:08:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 [2024-07-16 00:08:01.552370] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:10.174 [2024-07-16 00:08:01.552396] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:10.174 [2024-07-16 00:08:01.552412] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:10.174 [2024-07-16 00:08:01.552423] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:10.174 [2024-07-16 00:08:01.552432] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.174 00:08:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 [2024-07-16 00:08:01.645348] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
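The bring-up just traced is the standard pattern for swapping schedulers: the app starts with --wait-for-rpc so the framework pauses before init, framework_set_scheduler selects the dynamic scheduler (the dpdk_governor error above is non-fatal; the trace shows scheduling proceeds without the governor), and framework_start_init resumes start-up. Condensed:

  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

  # (poll /var/tmp/spdk.sock as usual before issuing RPCs)
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init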
00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.174 00:08:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 ************************************ 00:04:10.174 START TEST scheduler_create_thread 00:04:10.174 ************************************ 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 2 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 3 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 4 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 5 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 6 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.174 7 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:10.174 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.175 8 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.175 9 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.175 10 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.175 00:08:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.742 00:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.742 00:04:10.742 real 0m0.588s 00:04:10.742 user 0m0.010s 00:04:10.742 sys 0m0.003s 00:04:10.742 00:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.742 00:08:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.742 ************************************ 00:04:10.742 END TEST scheduler_create_thread 00:04:10.742 ************************************ 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:10.742 00:08:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:10.742 00:08:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2163769 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2163769 ']' 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2163769 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2163769 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2163769' 00:04:10.742 killing process with pid 2163769 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2163769 00:04:10.742 00:08:02 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2163769 00:04:11.310 [2024-07-16 00:08:02.737488] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
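The create/set/delete traffic above comes from an rpc.py plugin rather than built-in RPCs: scheduler_thread_create takes a name, an optional cpumask and an active percentage, scheduler_thread_set_active retunes a thread by id, and scheduler_thread_delete removes it. A sketch of the same calls; the PYTHONPATH line is an assumption about where scheduler_plugin.py lives, which the test scripts normally arrange:

  export PYTHONPATH=./test/event/scheduler:$PYTHONPATH   # assumed plugin location

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # id from create
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12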
00:04:11.610 00:04:11.610 real 0m1.730s 00:04:11.610 user 0m2.140s 00:04:11.610 sys 0m0.314s 00:04:11.610 00:08:02 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.610 00:08:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.610 ************************************ 00:04:11.610 END TEST event_scheduler 00:04:11.610 ************************************ 00:04:11.610 00:08:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:11.610 00:08:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:11.610 00:08:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:11.610 00:08:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.610 00:08:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.610 00:08:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.610 ************************************ 00:04:11.610 START TEST app_repeat 00:04:11.610 ************************************ 00:04:11.610 00:08:03 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2164001 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2164001' 00:04:11.610 Process app_repeat pid: 2164001 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:11.610 spdk_app_start Round 0 00:04:11.610 00:08:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2164001 /var/tmp/spdk-nbd.sock 00:04:11.610 00:08:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2164001 ']' 00:04:11.610 00:08:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:11.610 00:08:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.610 00:08:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:11.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:11.610 00:08:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.610 00:08:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:11.610 [2024-07-16 00:08:03.067487] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
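waitforlisten above polls until the freshly started app_repeat answers on /var/tmp/spdk-nbd.sock. A sketch of that pattern; max_retries=100 matches the trace, while the probe RPC (rpc_get_methods), the rpc.py path, and the 0.5 s interval are assumptions:

    # Hedged sketch of the waitforlisten pattern traced above.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-nbd.sock}
        local rpc=./scripts/rpc.py max_retries=100 i  # assumes cwd is the spdk tree
        for ((i = 1; i <= max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # app died during startup
            if [ -S "$rpc_addr" ] &&
               "$rpc" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                              # socket exists and answers RPCs
            fi
            sleep 0.5                                 # assumed poll interval
        done
        return 1                                      # never came up
    }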
00:04:11.610 [2024-07-16 00:08:03.067553] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164001 ] 00:04:11.610 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.610 [2024-07-16 00:08:03.132099] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.866 [2024-07-16 00:08:03.250334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.866 [2024-07-16 00:08:03.250338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.866 00:08:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.866 00:08:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:11.866 00:08:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.123 Malloc0 00:04:12.123 00:08:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.380 Malloc1 00:04:12.380 00:08:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.380 00:08:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:12.637 /dev/nbd0 00:04:12.637 00:08:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:12.637 00:08:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:12.637 00:08:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:12.637 00:08:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:12.637 00:08:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:12.637 00:08:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:12.637 00:08:04 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:12.637 00:08:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:12.637 00:08:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:12.638 00:08:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:12.638 00:08:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:12.638 1+0 records in 00:04:12.638 1+0 records out 00:04:12.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000136047 s, 30.1 MB/s 00:04:12.638 00:08:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.638 00:08:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:12.638 00:08:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.638 00:08:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:12.638 00:08:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:12.638 00:08:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.638 00:08:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.638 00:08:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:12.895 /dev/nbd1 00:04:12.895 00:08:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:12.895 00:08:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:12.895 1+0 records in 00:04:12.895 1+0 records out 00:04:12.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200655 s, 20.4 MB/s 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:12.895 00:08:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:12.895 00:08:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.895 00:08:04 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.895 00:08:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.895 00:08:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.895 00:08:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.152 { 00:04:13.152 "nbd_device": "/dev/nbd0", 00:04:13.152 "bdev_name": "Malloc0" 00:04:13.152 }, 00:04:13.152 { 00:04:13.152 "nbd_device": "/dev/nbd1", 00:04:13.152 "bdev_name": "Malloc1" 00:04:13.152 } 00:04:13.152 ]' 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.152 { 00:04:13.152 "nbd_device": "/dev/nbd0", 00:04:13.152 "bdev_name": "Malloc0" 00:04:13.152 }, 00:04:13.152 { 00:04:13.152 "nbd_device": "/dev/nbd1", 00:04:13.152 "bdev_name": "Malloc1" 00:04:13.152 } 00:04:13.152 ]' 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.152 /dev/nbd1' 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.152 /dev/nbd1' 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.152 256+0 records in 00:04:13.152 256+0 records out 00:04:13.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513427 s, 204 MB/s 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.152 00:08:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.410 256+0 records in 00:04:13.410 256+0 records out 00:04:13.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238273 s, 44.0 MB/s 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.410 256+0 records in 00:04:13.410 256+0 records out 00:04:13.410 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.023374 s, 44.9 MB/s 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.410 00:08:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.669 00:08:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.927 00:08:05 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.927 00:08:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:14.185 00:08:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:14.185 00:08:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.444 00:08:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:14.704 [2024-07-16 00:08:06.205526] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.985 [2024-07-16 00:08:06.320396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.985 [2024-07-16 00:08:06.320396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.985 [2024-07-16 00:08:06.380656] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:14.985 [2024-07-16 00:08:06.380729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:17.539 00:08:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.539 00:08:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:17.539 spdk_app_start Round 1 00:04:17.539 00:08:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2164001 /var/tmp/spdk-nbd.sock 00:04:17.539 00:08:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2164001 ']' 00:04:17.539 00:08:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.539 00:08:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.539 00:08:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
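Round 0 above ran the full cycle: create two malloc bdevs (64 MB, 4 KiB blocks), export and verify them over nbd, then spdk_kill_instance SIGTERM so the app restarts for the next round. A sketch of the loop in event/event.sh driving this (lines @23-@35 in the trace), assuming the helpers from autotest_common.sh and nbd_common.sh are sourced; rpc() here is a local convenience wrapper, not a framework name:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }  # path assumed
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock  # wait for the restarted app
        malloc0=$(rpc bdev_malloc_create 64 4096)           # prints "Malloc0"
        malloc1=$(rpc bdev_malloc_create 64 4096)           # prints "Malloc1"
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock \
            "$malloc0 $malloc1" '/dev/nbd0 /dev/nbd1'       # export, write, read back, compare
        rpc spdk_kill_instance SIGTERM                      # app catches it, next round begins
        sleep 3
    done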
00:04:17.539 00:08:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.539 00:08:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.799 00:08:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:17.799 00:08:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:17.799 00:08:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.060 Malloc0 00:04:18.060 00:08:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.060 Malloc1 00:04:18.318 00:08:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.318 00:08:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.576 /dev/nbd0 00:04:18.576 00:08:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.576 00:08:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:18.576 1+0 records in 00:04:18.576 1+0 records out 00:04:18.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021731 s, 18.8 MB/s 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:18.576 00:08:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:18.576 00:08:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.576 00:08:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.576 00:08:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.834 /dev/nbd1 00:04:18.834 00:08:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:18.834 00:08:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.834 1+0 records in 00:04:18.834 1+0 records out 00:04:18.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238631 s, 17.2 MB/s 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:18.834 00:08:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:18.834 00:08:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.834 00:08:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.834 00:08:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.835 00:08:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.835 00:08:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:19.092 { 00:04:19.092 "nbd_device": "/dev/nbd0", 00:04:19.092 "bdev_name": "Malloc0" 00:04:19.092 }, 00:04:19.092 { 00:04:19.092 "nbd_device": "/dev/nbd1", 00:04:19.092 "bdev_name": "Malloc1" 00:04:19.092 } 00:04:19.092 ]' 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.092 { 00:04:19.092 "nbd_device": "/dev/nbd0", 00:04:19.092 "bdev_name": "Malloc0" 00:04:19.092 }, 00:04:19.092 { 00:04:19.092 "nbd_device": "/dev/nbd1", 00:04:19.092 "bdev_name": "Malloc1" 00:04:19.092 } 00:04:19.092 ]' 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.092 /dev/nbd1' 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.092 /dev/nbd1' 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.092 00:08:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.092 256+0 records in 00:04:19.092 256+0 records out 00:04:19.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503757 s, 208 MB/s 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.093 256+0 records in 00:04:19.093 256+0 records out 00:04:19.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239227 s, 43.8 MB/s 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.093 256+0 records in 00:04:19.093 256+0 records out 00:04:19.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237539 s, 44.1 MB/s 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.093 00:08:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.351 00:08:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.609 00:08:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:19.867 00:08:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:19.867 00:08:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.125 00:08:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:20.383 [2024-07-16 00:08:11.973204] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.641 [2024-07-16 00:08:12.087841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.641 [2024-07-16 00:08:12.087845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.641 [2024-07-16 00:08:12.148888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:20.641 [2024-07-16 00:08:12.148987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.170 00:08:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.170 00:08:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:23.170 spdk_app_start Round 2 00:04:23.170 00:08:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2164001 /var/tmp/spdk-nbd.sock 00:04:23.170 00:08:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2164001 ']' 00:04:23.170 00:08:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.170 00:08:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.170 00:08:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
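The dd from /dev/urandom and the pair of cmp calls traced above are nbd_rpc_data_verify's write and verify passes. A condensed sketch of that flow; the block sizes, counts, and flags match the trace (bs=4096 count=256, oflag=direct, cmp -b -n 1M), while the simplified signature (operation first, devices as arguments) does not:

    # Hedged sketch of the nbd_dd_data_verify write/verify passes.
    nbd_dd_data_verify_sketch() {
        local operation=$1; shift
        local tmp_file=/tmp/nbdrandtest nbd
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256  # 1 MiB random pattern
            for nbd in "$@"; do
                dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
            done
        else  # verify
            for nbd in "$@"; do
                cmp -b -n 1M "$tmp_file" "$nbd"  # byte-compare the first 1 MiB
            done
            rm "$tmp_file"                       # the rm seen at @85 above
        fi
    }

It is called once with write after the bdevs are exported and once with verify before teardown, which is exactly the two dd/cmp bursts visible in each round of the log.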
00:04:23.170 00:08:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.170 00:08:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:23.428 00:08:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.428 00:08:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:23.428 00:08:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.686 Malloc0 00:04:23.686 00:08:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.946 Malloc1 00:04:23.946 00:08:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.946 00:08:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.204 /dev/nbd0 00:04:24.204 00:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.204 00:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:24.204 00:08:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:24.204 1+0 records in 00:04:24.204 1+0 records out 00:04:24.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154227 s, 26.6 MB/s 00:04:24.205 00:08:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.205 00:08:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:24.205 00:08:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.205 00:08:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:24.205 00:08:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:24.205 00:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.205 00:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.205 00:08:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.463 /dev/nbd1 00:04:24.463 00:08:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.463 00:08:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.463 1+0 records in 00:04:24.463 1+0 records out 00:04:24.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226592 s, 18.1 MB/s 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:24.463 00:08:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:24.463 00:08:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.463 00:08:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.463 00:08:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.463 00:08:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.463 00:08:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:24.721 { 00:04:24.721 "nbd_device": "/dev/nbd0", 00:04:24.721 "bdev_name": "Malloc0" 00:04:24.721 }, 00:04:24.721 { 00:04:24.721 "nbd_device": "/dev/nbd1", 00:04:24.721 "bdev_name": "Malloc1" 00:04:24.721 } 00:04:24.721 ]' 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.721 { 00:04:24.721 "nbd_device": "/dev/nbd0", 00:04:24.721 "bdev_name": "Malloc0" 00:04:24.721 }, 00:04:24.721 { 00:04:24.721 "nbd_device": "/dev/nbd1", 00:04:24.721 "bdev_name": "Malloc1" 00:04:24.721 } 00:04:24.721 ]' 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.721 /dev/nbd1' 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.721 /dev/nbd1' 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.721 00:08:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.979 256+0 records in 00:04:24.979 256+0 records out 00:04:24.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507619 s, 207 MB/s 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.979 256+0 records in 00:04:24.979 256+0 records out 00:04:24.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240135 s, 43.7 MB/s 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.979 256+0 records in 00:04:24.979 256+0 records out 00:04:24.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231386 s, 45.3 MB/s 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.979 00:08:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.237 00:08:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.494 00:08:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.751 00:08:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.751 00:08:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.009 00:08:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.269 [2024-07-16 00:08:17.776811] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.529 [2024-07-16 00:08:17.891658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.529 [2024-07-16 00:08:17.891660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.529 [2024-07-16 00:08:17.953250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.529 [2024-07-16 00:08:17.953331] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.062 00:08:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2164001 /var/tmp/spdk-nbd.sock 00:04:29.062 00:08:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2164001 ']' 00:04:29.062 00:08:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.062 00:08:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.062 00:08:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
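After the disks are stopped, nbd_get_count asserts the export list is empty: nbd_get_disks returns '[]', jq extracts the nbd_device fields, and grep -c /dev/nbd counts them. A sketch under those assumptions (the bare "# true" at @65 in the trace is the fallback for grep's non-zero exit when it counts zero matches):

    # Hedged sketch of nbd_get_count (bdev/nbd_common.sh@61-66 in the trace).
    nbd_get_count_sketch() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks)  # '[]' once stopped
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero on zero matches, hence the true at @65
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }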
00:04:29.062 00:08:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.062 00:08:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:29.321 00:08:20 event.app_repeat -- event/event.sh@39 -- # killprocess 2164001 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2164001 ']' 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2164001 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2164001 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2164001' 00:04:29.321 killing process with pid 2164001 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2164001 00:04:29.321 00:08:20 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2164001 00:04:29.580 spdk_app_start is called in Round 0. 00:04:29.580 Shutdown signal received, stop current app iteration 00:04:29.580 Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 reinitialization... 00:04:29.580 spdk_app_start is called in Round 1. 00:04:29.580 Shutdown signal received, stop current app iteration 00:04:29.580 Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 reinitialization... 00:04:29.580 spdk_app_start is called in Round 2. 00:04:29.580 Shutdown signal received, stop current app iteration 00:04:29.580 Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 reinitialization... 00:04:29.580 spdk_app_start is called in Round 3. 
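Every nbd_stop_disk in the rounds above is followed by the same waitfornbd_exit poll: up to 20 checks of /proc/partitions until the device name disappears (its mirror image, waitfornbd, waits for the name to appear and then read-tests the device with the single-block dd seen earlier). A sketch of the exit poll; the loop bound and the grep match the trace, the sleep interval is assumed:

    # Hedged sketch of waitfornbd_exit (bdev/nbd_common.sh@35-45 in the trace).
    waitfornbd_exit_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0      # device gone, the stop succeeded (the "break" at @41)
            fi
            sleep 0.1         # assumed back-off between polls
        done
        return 1              # still present after 20 tries
    }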
00:04:29.580 Shutdown signal received, stop current app iteration
00:04:29.580 00:08:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:29.580 00:08:21 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:29.580
00:04:29.580 real 0m17.996s
00:04:29.580 user 0m38.889s
00:04:29.580 sys 0m3.244s
00:04:29.580 00:08:21 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:29.580 00:08:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:29.580 ************************************
00:04:29.580 END TEST app_repeat
00:04:29.580 ************************************
00:04:29.580 00:08:21 event -- common/autotest_common.sh@1142 -- # return 0
00:04:29.580 00:08:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:29.580 00:08:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:29.580 00:08:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:29.580 00:08:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:29.580 00:08:21 event -- common/autotest_common.sh@10 -- # set +x
00:04:29.580 ************************************
00:04:29.580 START TEST cpu_locks
00:04:29.580 ************************************
00:04:29.580 00:08:21 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:29.580 * Looking for test storage...
00:04:29.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:29.580 00:08:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:29.580 00:08:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:29.580 00:08:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:29.580 00:08:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:29.580 00:08:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:29.580 00:08:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:29.580 00:08:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:29.580 ************************************
00:04:29.580 START TEST default_locks
00:04:29.580 ************************************
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2166424
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2166424
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2166424 ']'
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
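The default_locks run that starts here is the simplest case: spdk_tgt -m 0x1 pins the app to core 0 and claims a per-core lock file on startup (named /var/tmp/spdk_cpu_lock_000 for core 0, as the check_remaining_locks step later in this log shows). The locks_exist helper traced below verifies the claim by listing the holder's file locks. A minimal sketch of the same probe, assuming spdk_tgt is on PATH (the log invokes it by absolute path):

    # Start a target on core 0 and confirm it holds a spdk_cpu_lock file.
    spdk_tgt -m 0x1 &
    pid=$!
    sleep 1                                   # crude stand-in for waitforlisten
    lslocks -p "$pid" | grep -q spdk_cpu_lock \
        && echo "pid $pid holds the core 0 lock file"
    kill "$pid" && wait "$pid" || true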
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:29.580 00:08:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:29.839 [2024-07-16 00:08:21.213043] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
[2024-07-16 00:08:21.213141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166424 ]
00:04:29.839 EAL: No free 2048 kB hugepages reported on node 1
00:04:29.839 [2024-07-16 00:08:21.275685] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:29.839 [2024-07-16 00:08:21.390984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:30.873 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:30.873 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:04:30.873 00:08:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2166424
00:04:30.873 00:08:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2166424
00:04:30.873 00:08:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:31.132 lslocks: write error
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2166424
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2166424 ']'
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2166424
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2166424
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2166424'
killing process with pid 2166424
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2166424
00:04:31.132 00:08:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2166424
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2166424
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2166424
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 2166424
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2166424 ']'
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:31.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2166424) - No such process
00:04:31.695 ERROR: process (pid: 2166424) is no longer running
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:31.695
00:04:31.695 real 0m1.966s
00:04:31.695 user 0m2.094s
00:04:31.695 sys 0m0.597s
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:31.695 00:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:31.695 ************************************
00:04:31.695 END TEST default_locks
00:04:31.695 ************************************
00:04:31.695 00:08:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:04:31.695 00:08:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:31.695 00:08:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:31.695 00:08:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:31.695 00:08:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:31.695 ************************************
00:04:31.695 START TEST default_locks_via_rpc
00:04:31.695 ************************************
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2166616
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2166616
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2166616 ']'
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:31.695 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:31.695 [2024-07-16 00:08:23.222381] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
[2024-07-16 00:08:23.222467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166616 ]
00:04:31.695 EAL: No free 2048 kB hugepages reported on node 1
00:04:31.695 [2024-07-16 00:08:23.278938] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:31.954 [2024-07-16 00:08:23.388767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2166616
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2166616
00:04:32.212 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2166616
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2166616 ']'
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2166616
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2166616
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2166616'
killing process with pid 2166616
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2166616
00:04:32.470 00:08:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2166616
00:04:33.038
00:04:33.038 real 0m1.265s
00:04:33.038 user 0m1.197s
00:04:33.038 sys 0m0.540s
00:04:33.038 00:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:33.038 00:08:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:33.038 ************************************
00:04:33.038 END TEST default_locks_via_rpc
00:04:33.038 ************************************
00:04:33.038 00:08:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:04:33.038 00:08:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:33.038 00:08:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:33.038 00:08:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:33.038 00:08:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:33.038 ************************************
00:04:33.038 START TEST non_locking_app_on_locked_coremask
00:04:33.038 ************************************
00:04:33.038 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:04:33.038 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2166812
00:04:33.038 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:33.038 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2166812 /var/tmp/spdk.sock
00:04:33.038 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2166812 ']'
00:04:33.038 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:33.038 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:33.038 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:33.039 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:33.039 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:33.039 [2024-07-16 00:08:24.537548] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
[2024-07-16 00:08:24.537658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166812 ]
00:04:33.039 EAL: No free 2048 kB hugepages reported on node 1
00:04:33.039 [2024-07-16 00:08:24.596014] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:33.298 [2024-07-16 00:08:24.705892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:33.556 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2166898
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2166898 /var/tmp/spdk2.sock
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2166898 ']'
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:33.557 00:08:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:33.557 [2024-07-16 00:08:25.014002] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
[2024-07-16 00:08:25.014096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166898 ]
00:04:33.557 EAL: No free 2048 kB hugepages reported on node 1
00:04:33.557 [2024-07-16 00:08:25.103972] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated.
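What makes this test pass is the --disable-cpumask-locks flag on the second instance, confirmed by the "CPU core locks deactivated" notice just above: it runs on the already-claimed core 0 but skips lock acquisition, so only the first target shows a spdk_cpu_lock entry. A hedged sketch of the same arrangement, again assuming spdk_tgt is on PATH:

    spdk_tgt -m 0x1 &                          # first instance claims core 0
    pid1=$!
    sleep 1
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                    # same core, but takes no lock
    sleep 1
    lslocks -p "$pid2" | grep -q spdk_cpu_lock \
        || echo "pid $pid2 holds no core lock, as intended"
    kill "$pid1" "$pid2"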
00:04:33.557 [2024-07-16 00:08:25.104004] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:33.815 [2024-07-16 00:08:25.344251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:34.379 00:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:34.379 00:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:04:34.379 00:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2166812
00:04:34.379 00:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2166812
00:04:34.379 00:08:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:34.947 lslocks: write error
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2166812
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2166812 ']'
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2166812
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2166812
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2166812'
killing process with pid 2166812
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2166812
00:04:34.947 00:08:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2166812
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2166898
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2166898 ']'
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2166898
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2166898
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2166898'
killing process with pid 2166898
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2166898
00:04:35.883 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2166898
00:04:36.451
00:04:36.451 real 0m3.348s
00:04:36.451 user 0m3.464s
00:04:36.451 sys 0m1.060s
00:04:36.451 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:36.451 00:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:36.451 ************************************
00:04:36.451 END TEST non_locking_app_on_locked_coremask
00:04:36.451 ************************************
00:04:36.451 00:08:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:04:36.451 00:08:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:36.451 00:08:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:36.451 00:08:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:36.451 00:08:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:36.451 ************************************
00:04:36.451 START TEST locking_app_on_unlocked_coremask
00:04:36.451 ************************************
00:04:36.451 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:04:36.451 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2167227
00:04:36.451 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:36.451 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2167227 /var/tmp/spdk.sock
00:04:36.451 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2167227 ']'
00:04:36.452 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:36.452 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:36.452 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:36.452 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:36.452 00:08:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:36.452 [2024-07-16 00:08:27.939833] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
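locking_app_on_unlocked_coremask, started above, flips the roles: the first target runs with --disable-cpumask-locks and leaves the core 0 lock file unclaimed, so a second, lock-taking target on the same mask can start, and it is the one locks_exist checks. A sketch of that launch order, under the same PATH assumption as the earlier sketches:

    spdk_tgt -m 0x1 --disable-cpumask-locks &      # first instance holds no lock
    pid1=$!
    sleep 1
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # succeeds: the lock was free
    pid2=$!
    sleep 1
    lslocks -p "$pid2" | grep -q spdk_cpu_lock \
        && echo "pid $pid2 owns the core 0 lock"
    kill "$pid1" "$pid2"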
00:04:36.452 [2024-07-16 00:08:27.939958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167227 ]
00:04:36.452 EAL: No free 2048 kB hugepages reported on node 1
00:04:36.452 [2024-07-16 00:08:28.002758] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:36.452 [2024-07-16 00:08:28.002795] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:36.711 [2024-07-16 00:08:28.120440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2167348
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2167348 /var/tmp/spdk2.sock
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2167348 ']'
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:37.277 00:08:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:37.536 [2024-07-16 00:08:28.920825] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:37.536 [2024-07-16 00:08:28.920932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167348 ]
00:04:37.537 EAL: No free 2048 kB hugepages reported on node 1
00:04:37.537 [2024-07-16 00:08:29.015506] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:37.796 [2024-07-16 00:08:29.250553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.363 00:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:38.363 00:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:04:38.363 00:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2167348
00:04:38.363 00:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2167348
00:04:38.363 00:08:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:38.930 lslocks: write error
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2167227
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2167227 ']'
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2167227
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2167227
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2167227'
killing process with pid 2167227
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2167227
00:04:38.930 00:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2167227
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2167348
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2167348 ']'
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2167348
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2167348
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2167348'
killing process with pid 2167348
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2167348
00:04:39.870 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2167348
00:04:40.439
00:04:40.440 real 0m3.960s
00:04:40.440 user 0m4.288s
00:04:40.440 sys 0m1.123s
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:40.440 ************************************
00:04:40.440 END TEST locking_app_on_unlocked_coremask
00:04:40.440 ************************************
00:04:40.440 00:08:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:04:40.440 00:08:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:40.440 00:08:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:40.440 00:08:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:40.440 00:08:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:40.440 ************************************
00:04:40.440 START TEST locking_app_on_locked_coremask
00:04:40.440 ************************************
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2167773
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2167773 /var/tmp/spdk.sock
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2167773 ']'
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:40.440 00:08:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:40.440 [2024-07-16 00:08:31.941588] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
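locking_app_on_locked_coremask, which starts here, is the first test expected to fail on purpose: a second target will be launched on the already-locked core through the NOT helper from common/autotest_common.sh. As the traces show, NOT validates its first argument with type -t, runs the command, captures the exit status into es, and succeeds only when the command failed. A simplified sketch of the idea:

    # Simplified sketch of the NOT helper; the real one also normalizes
    # signal exit codes and validates the argument, as the case
    # statements in the trace show.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))               # exit 0 only if the command failed
    }

    NOT false && echo "false failed, as expected"
    NOT true || echo "true succeeded, so NOT reports failure"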
00:04:40.440 [2024-07-16 00:08:31.941682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167773 ]
00:04:40.440 EAL: No free 2048 kB hugepages reported on node 1
00:04:40.440 [2024-07-16 00:08:31.999241] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.699 [2024-07-16 00:08:32.105951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2167794
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2167794 /var/tmp/spdk2.sock
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2167794 /var/tmp/spdk2.sock
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2167794 /var/tmp/spdk2.sock
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2167794 ']'
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:40.958 00:08:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:40.958 [2024-07-16 00:08:32.419260] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:40.958 [2024-07-16 00:08:32.419349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167794 ]
00:04:40.958 EAL: No free 2048 kB hugepages reported on node 1
00:04:40.958 [2024-07-16 00:08:32.517060] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2167773 has claimed it.
00:04:40.958 [2024-07-16 00:08:32.517109] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:41.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2167794) - No such process
00:04:41.527 ERROR: process (pid: 2167794) is no longer running
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2167773
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2167773
00:04:41.527 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:42.092 lslocks: write error
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2167773
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2167773 ']'
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2167773
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2167773
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2167773'
killing process with pid 2167773
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2167773
00:04:42.092 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2167773
00:04:42.350
00:04:42.350 real 0m2.001s
00:04:42.350 user 0m2.154s
00:04:42.350 sys 0m0.612s
00:04:42.350 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:42.350 00:08:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:42.350 ************************************
00:04:42.350 END TEST locking_app_on_locked_coremask
00:04:42.350 ************************************
00:04:42.350 00:08:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:04:42.350 00:08:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:04:42.350 00:08:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:42.350 00:08:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:42.350 00:08:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:42.350 ************************************
00:04:42.350 START TEST locking_overlapped_coremask
00:04:42.350 ************************************
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2168069
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2168069 /var/tmp/spdk.sock
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2168069 ']'
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:42.350 00:08:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:42.608 [2024-07-16 00:08:33.997672] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
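locking_overlapped_coremask moves from a single core to a mask: -m 0x7 is binary 111, so reactors come up on cores 0-2 and three lock files are claimed. The second instance below uses -m 0x1c (cores 2-4), which overlaps on core 2, exactly the core the claim error will point at. The overlap is plain bit arithmetic:

    # 0x7 = 0b00111 -> cores 0,1,2; 0x1c = 0b11100 -> cores 2,3,4.
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2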
00:04:42.608 [2024-07-16 00:08:33.997759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168069 ]
00:04:42.608 EAL: No free 2048 kB hugepages reported on node 1
00:04:42.608 [2024-07-16 00:08:34.060294] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:42.608 [2024-07-16 00:08:34.177234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:42.608 [2024-07-16 00:08:34.177290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:42.608 [2024-07-16 00:08:34.177294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2168208
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2168208 /var/tmp/spdk2.sock
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2168208 /var/tmp/spdk2.sock
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2168208 /var/tmp/spdk2.sock
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2168208 ']'
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:43.542 00:08:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:43.542 [2024-07-16 00:08:34.982532] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:43.542 [2024-07-16 00:08:34.982618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168208 ]
00:04:43.542 EAL: No free 2048 kB hugepages reported on node 1
00:04:43.542 [2024-07-16 00:08:35.070538] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2168069 has claimed it.
00:04:43.542 [2024-07-16 00:08:35.070611] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:44.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2168208) - No such process
00:04:44.133 ERROR: process (pid: 2168208) is no longer running
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2168069
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2168069 ']'
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2168069
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2168069
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2168069'
killing process with pid 2168069
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 2168069
00:04:44.133 00:08:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2168069
00:04:44.712
00:04:44.712 real 0m2.210s
00:04:44.712 user 0m6.196s
00:04:44.712 sys 0m0.482s
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:44.712 ************************************
00:04:44.712 END TEST locking_overlapped_coremask
00:04:44.712 ************************************
00:04:44.712 00:08:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:04:44.712 00:08:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:04:44.712 00:08:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:44.712 00:08:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:44.712 00:08:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:44.712 ************************************
00:04:44.712 START TEST locking_overlapped_coremask_via_rpc
00:04:44.712 ************************************
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2168378
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2168378 /var/tmp/spdk.sock
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2168378 ']'
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:44.712 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.712 [2024-07-16 00:08:36.257420] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:44.712 [2024-07-16 00:08:36.257485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168378 ]
00:04:44.712 EAL: No free 2048 kB hugepages reported on node 1
00:04:44.712 [2024-07-16 00:08:36.312909] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated.
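The _via_rpc variant started here launches both targets with --disable-cpumask-locks and claims the locks only afterwards, through the framework_enable_cpumask_locks RPC (its disable counterpart appeared in default_locks_via_rpc earlier). A sketch of the two calls the test is about to make, using the rpc.py path and socket names from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # first instance claims cores 0-2
    $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "second instance cannot claim: core 2 is already locked"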
00:04:44.712 [2024-07-16 00:08:36.312948] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:44.970 [2024-07-16 00:08:36.424213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.970 [2024-07-16 00:08:36.424271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.970 [2024-07-16 00:08:36.424274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2168396 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2168396 /var/tmp/spdk2.sock 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2168396 ']' 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.229 00:08:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 [2024-07-16 00:08:36.722571] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:45.229 [2024-07-16 00:08:36.722657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168396 ] 00:04:45.229 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.229 [2024-07-16 00:08:36.810377] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:45.229 [2024-07-16 00:08:36.810420] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.489 [2024-07-16 00:08:37.029134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.489 [2024-07-16 00:08:37.032970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:45.489 [2024-07-16 00:08:37.032973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.059 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.319 [2024-07-16 00:08:37.675976] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2168378 has claimed it. 
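The "Cannot create lock on core 2" error above is the collision the test is engineered to produce: both targets started lock-free, the first then claimed its lock files over RPC, and the second's claim fails on the one core the two masks share. The overlap falls out of the masks themselves:

    # 0x7  = 0b00111 -> cores 0,1,2 (first target)
    # 0x1c = 0b11100 -> cores 2,3,4 (second target)
    printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 = core 2
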
00:04:46.319 request: 00:04:46.319 { 00:04:46.319 "method": "framework_enable_cpumask_locks", 00:04:46.319 "req_id": 1 00:04:46.319 } 00:04:46.319 Got JSON-RPC error response 00:04:46.319 response: 00:04:46.319 { 00:04:46.319 "code": -32603, 00:04:46.319 "message": "Failed to claim CPU core: 2" 00:04:46.319 } 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2168378 /var/tmp/spdk.sock 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2168378 ']' 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2168396 /var/tmp/spdk2.sock 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2168396 ']' 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
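The JSON-RPC exchange above comes from the test's rpc_cmd helper; an equivalent by hand, assuming SPDK's stock client scripts/rpc.py and its default /var/tmp/spdk.sock for the first target, would look like this:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # First target claims /var/tmp/spdk_cpu_lock_000..002 -- succeeds.
    ./scripts/rpc.py framework_enable_cpumask_locks

    # Second target then fails on the shared core with the -32603
    # "Failed to claim CPU core: 2" response shown above.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
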
00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.319 00:08:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.887 00:08:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.887 00:08:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.887 00:08:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:46.887 00:08:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:46.887 00:08:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:46.887 00:08:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:46.887 00:04:46.887 real 0m1.988s 00:04:46.887 user 0m1.030s 00:04:46.887 sys 0m0.192s 00:04:46.887 00:08:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.887 00:08:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.887 ************************************ 00:04:46.887 END TEST locking_overlapped_coremask_via_rpc 00:04:46.887 ************************************ 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:46.887 00:08:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:46.887 00:08:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2168378 ]] 00:04:46.887 00:08:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2168378 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2168378 ']' 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2168378 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2168378 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2168378' 00:04:46.887 killing process with pid 2168378 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2168378 00:04:46.887 00:08:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2168378 00:04:47.145 00:08:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2168396 ]] 00:04:47.145 00:08:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2168396 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2168396 ']' 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2168396 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2168396 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2168396' 00:04:47.145 killing process with pid 2168396 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2168396 00:04:47.145 00:08:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2168396 00:04:47.713 00:08:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:47.713 00:08:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:47.713 00:08:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2168378 ]] 00:04:47.713 00:08:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2168378 00:04:47.714 00:08:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2168378 ']' 00:04:47.714 00:08:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2168378 00:04:47.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2168378) - No such process 00:04:47.714 00:08:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2168378 is not found' 00:04:47.714 Process with pid 2168378 is not found 00:04:47.714 00:08:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2168396 ]] 00:04:47.714 00:08:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2168396 00:04:47.714 00:08:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2168396 ']' 00:04:47.714 00:08:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2168396 00:04:47.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2168396) - No such process 00:04:47.714 00:08:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2168396 is not found' 00:04:47.714 Process with pid 2168396 is not found 00:04:47.714 00:08:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:47.714 00:04:47.714 real 0m18.092s 00:04:47.714 user 0m31.447s 00:04:47.714 sys 0m5.506s 00:04:47.714 00:08:39 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.714 00:08:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.714 ************************************ 00:04:47.714 END TEST cpu_locks 00:04:47.714 ************************************ 00:04:47.714 00:08:39 event -- common/autotest_common.sh@1142 -- # return 0 00:04:47.714 00:04:47.714 real 0m42.114s 00:04:47.714 user 1m19.311s 00:04:47.714 sys 0m9.531s 00:04:47.714 00:08:39 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.714 00:08:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.714 ************************************ 00:04:47.714 END TEST event 00:04:47.714 ************************************ 00:04:47.714 00:08:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.714 00:08:39 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:47.714 00:08:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.714 00:08:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.714 
00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.714 ************************************ 00:04:47.714 START TEST thread 00:04:47.714 ************************************ 00:04:47.714 00:08:39 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:47.714 * Looking for test storage... 00:04:47.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:47.714 00:08:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:47.714 00:08:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:47.714 00:08:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.714 00:08:39 thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.973 ************************************ 00:04:47.973 START TEST thread_poller_perf 00:04:47.973 ************************************ 00:04:47.973 00:08:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:47.973 [2024-07-16 00:08:39.341146] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:47.973 [2024-07-16 00:08:39.341215] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168878 ] 00:04:47.973 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.973 [2024-07-16 00:08:39.403849] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.973 [2024-07-16 00:08:39.518704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.973 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:49.355 ====================================== 00:04:49.355 busy:2710257983 (cyc) 00:04:49.355 total_run_count: 293000 00:04:49.355 tsc_hz: 2700000000 (cyc) 00:04:49.355 ====================================== 00:04:49.355 poller_cost: 9250 (cyc), 3425 (nsec) 00:04:49.355 00:04:49.355 real 0m1.322s 00:04:49.355 user 0m1.238s 00:04:49.355 sys 0m0.079s 00:04:49.355 00:08:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.355 00:08:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.355 ************************************ 00:04:49.355 END TEST thread_poller_perf 00:04:49.355 ************************************ 00:04:49.355 00:08:40 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:49.355 00:08:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:49.355 00:08:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:49.355 00:08:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.355 00:08:40 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.355 ************************************ 00:04:49.355 START TEST thread_poller_perf 00:04:49.355 ************************************ 00:04:49.355 00:08:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:49.355 [2024-07-16 00:08:40.706269] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:49.355 [2024-07-16 00:08:40.706332] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169032 ] 00:04:49.355 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.355 [2024-07-16 00:08:40.768277] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.355 [2024-07-16 00:08:40.886595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.355 Running 1000 pollers for 1 seconds with 0 microseconds period. 
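The counters in the block above fold together as poller_cost = busy TSC cycles / total_run_count, converted to nanoseconds via tsc_hz. A quick integer-arithmetic check of the 1-microsecond-period run, using the log's own figures; the same formula reproduces the 697-cycle / 258 ns cost the zero-period run is about to report below:

    busy=2710257983 runs=293000 tsc_hz=2700000000
    echo "$(( busy / runs )) cyc"                           # 9250
    echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec"    # 3425
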
00:04:50.738 ====================================== 00:04:50.738 busy:2702707262 (cyc) 00:04:50.738 total_run_count: 3875000 00:04:50.739 tsc_hz: 2700000000 (cyc) 00:04:50.739 ====================================== 00:04:50.739 poller_cost: 697 (cyc), 258 (nsec) 00:04:50.739 00:04:50.739 real 0m1.319s 00:04:50.739 user 0m1.227s 00:04:50.739 sys 0m0.086s 00:04:50.739 00:08:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.739 00:08:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.739 ************************************ 00:04:50.739 END TEST thread_poller_perf 00:04:50.739 ************************************ 00:04:50.739 00:08:42 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:50.739 00:08:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:50.739 00:04:50.739 real 0m2.781s 00:04:50.739 user 0m2.519s 00:04:50.739 sys 0m0.261s 00:04:50.739 00:08:42 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.739 00:08:42 thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.739 ************************************ 00:04:50.739 END TEST thread 00:04:50.739 ************************************ 00:04:50.739 00:08:42 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.739 00:08:42 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:50.739 00:08:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.739 00:08:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.739 00:08:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.739 ************************************ 00:04:50.739 START TEST accel 00:04:50.739 ************************************ 00:04:50.739 00:08:42 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:50.739 * Looking for test storage... 00:04:50.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:50.739 00:08:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:50.739 00:08:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:50.739 00:08:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:50.739 00:08:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2169224 00:04:50.739 00:08:42 accel -- accel/accel.sh@63 -- # waitforlisten 2169224 00:04:50.739 00:08:42 accel -- common/autotest_common.sh@829 -- # '[' -z 2169224 ']' 00:04:50.739 00:08:42 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:50.739 00:08:42 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.739 00:08:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:50.739 00:08:42 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.739 00:08:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.739 00:08:42 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:50.739 00:08:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.739 00:08:42 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.739 00:08:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.739 00:08:42 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.739 00:08:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.739 00:08:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.739 00:08:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:50.739 00:08:42 accel -- accel/accel.sh@41 -- # jq -r . 00:04:50.739 [2024-07-16 00:08:42.182939] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:50.739 [2024-07-16 00:08:42.183025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169224 ] 00:04:50.739 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.739 [2024-07-16 00:08:42.247659] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.997 [2024-07-16 00:08:42.359215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.256 00:08:42 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.256 00:08:42 accel -- common/autotest_common.sh@862 -- # return 0 00:04:51.256 00:08:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:51.256 00:08:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:51.256 00:08:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:51.256 00:08:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:51.256 00:08:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:51.257 00:08:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.257 00:08:42 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 
00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.257 00:08:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.257 00:08:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.257 00:08:42 accel -- accel/accel.sh@75 -- # killprocess 2169224 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@948 -- # '[' -z 2169224 ']' 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@952 -- # kill -0 2169224 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@953 -- # uname 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2169224 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2169224' 00:04:51.257 killing process with pid 2169224 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@967 -- # kill 2169224 00:04:51.257 00:08:42 accel -- common/autotest_common.sh@972 -- # wait 2169224 00:04:51.825 00:08:43 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:51.825 00:08:43 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:51.825 00:08:43 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:51.825 00:08:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.825 00:08:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 00:08:43 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:51.825 00:08:43 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
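Backing up to the opcode dump above: expected_opcs is filled from the accel_get_opc_assignments RPC, flattened with the same jq filter the script echoes. Run by hand (again assuming scripts/rpc.py; the key names in the sample output are illustrative) it would look like the sketch below. With no hardware accel module configured, every opcode resolves to the software module, which is why each entry above lands as =software:

    ./scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software
    # crc32c=software
    # ... one line per opcode, all software in this run
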
00:04:51.825 00:08:43 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.825 00:08:43 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 00:08:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:51.825 00:08:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:51.825 00:08:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:51.825 00:08:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.825 00:08:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 ************************************ 00:04:51.825 START TEST accel_missing_filename 00:04:51.825 ************************************ 00:04:51.825 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:51.825 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:51.825 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:51.825 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:51.825 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.825 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:51.825 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.825 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:51.825 00:08:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:51.825 [2024-07-16 00:08:43.282694] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:51.825 [2024-07-16 00:08:43.282763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169400 ] 00:04:51.825 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.825 [2024-07-16 00:08:43.345801] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.084 [2024-07-16 00:08:43.461593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.084 [2024-07-16 00:08:43.523557] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.084 [2024-07-16 00:08:43.607528] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:52.344 A filename is required. 
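That "A filename is required." line is the point of the test just run: for -w compress, accel_perf needs an input file via -l, so launching it bare must fail, and the NOT wrapper asserts exactly that non-zero exit. In outline:

    # compress with no -l input file: the app refuses to start and exits non-zero.
    if ! ./build/examples/accel_perf -t 1 -w compress; then
        echo "rejected as expected"
    fi
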
00:04:52.344 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:52.344 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.344 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:52.344 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:52.344 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:52.344 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.344 00:04:52.344 real 0m0.467s 00:04:52.344 user 0m0.350s 00:04:52.344 sys 0m0.150s 00:04:52.344 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.344 00:08:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:52.344 ************************************ 00:04:52.344 END TEST accel_missing_filename 00:04:52.344 ************************************ 00:04:52.344 00:08:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.344 00:08:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.344 00:08:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:52.344 00:08:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.344 00:08:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.344 ************************************ 00:04:52.344 START TEST accel_compress_verify 00:04:52.344 ************************************ 00:04:52.344 00:08:43 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.344 00:08:43 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:52.344 00:08:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.344 00:08:43 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:52.344 00:08:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.344 00:08:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:52.344 00:08:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.344 00:08:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.344 00:08:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.344 00:08:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:52.344 00:08:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.344 00:08:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.344 00:08:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.344 00:08:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.344 00:08:43 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.344 00:08:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:52.344 00:08:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:52.344 [2024-07-16 00:08:43.800099] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:04:52.344 [2024-07-16 00:08:43.800164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169546 ] 00:04:52.344 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.344 [2024-07-16 00:08:43.863749] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.603 [2024-07-16 00:08:43.981846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.603 [2024-07-16 00:08:44.043465] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.603 [2024-07-16 00:08:44.128076] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:52.863 00:04:52.863 Compression does not support the verify option, aborting. 00:04:52.863 00:08:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:04:52.863 00:08:44 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.863 00:08:44 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:04:52.863 00:08:44 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:52.863 00:08:44 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:52.863 00:08:44 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.863 00:04:52.863 real 0m0.473s 00:04:52.863 user 0m0.361s 00:04:52.863 sys 0m0.145s 00:04:52.863 00:08:44 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.863 00:08:44 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:52.863 ************************************ 00:04:52.863 END TEST accel_compress_verify 00:04:52.863 ************************************ 00:04:52.863 00:08:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.863 00:08:44 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:52.863 00:08:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:52.863 00:08:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.863 00:08:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.863 ************************************ 00:04:52.863 START TEST accel_wrong_workload 00:04:52.863 ************************************ 00:04:52.863 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:52.863 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:52.863 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:52.863 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:52.863 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.863 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:52.863 00:08:44 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.863 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:52.863 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:52.863 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:52.863 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.864 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.864 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.864 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.864 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.864 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:52.864 00:08:44 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:52.864 Unsupported workload type: foobar 00:04:52.864 [2024-07-16 00:08:44.315774] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:52.864 accel_perf options: 00:04:52.864 [-h help message] 00:04:52.864 [-q queue depth per core] 00:04:52.864 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:52.864 [-T number of threads per core 00:04:52.864 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:52.864 [-t time in seconds] 00:04:52.864 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:52.864 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:52.864 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:52.864 [-l for compress/decompress workloads, name of uncompressed input file 00:04:52.864 [-S for crc32c workload, use this seed value (default 0) 00:04:52.864 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:52.864 [-f for fill workload, use this BYTE value (default 255) 00:04:52.864 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:52.864 [-y verify result if this switch is on] 00:04:52.864 [-a tasks to allocate per core (default: same value as -q)] 00:04:52.864 Can be used to spread operations across a wider range of memory. 
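The option listing above (printed because -w foobar was rejected) covers every flag this log exercises; the concrete invocations that appear verbatim later in the transcript are:

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # crc32c, seed 32, verify results
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2    # io vector size 2
    ./build/examples/accel_perf -t 1 -w xor -y -x -1      # invalid: -x must be non-negative
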
00:04:52.864 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:52.864 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.864 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.864 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.864 00:04:52.864 real 0m0.020s 00:04:52.864 user 0m0.013s 00:04:52.864 sys 0m0.007s 00:04:52.864 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.864 00:08:44 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:52.864 ************************************ 00:04:52.864 END TEST accel_wrong_workload 00:04:52.864 ************************************ 00:04:52.864 Error: writing output failed: Broken pipe 00:04:52.864 00:08:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.864 00:08:44 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:52.864 00:08:44 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:52.864 00:08:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.864 00:08:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.864 ************************************ 00:04:52.864 START TEST accel_negative_buffers 00:04:52.864 ************************************ 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:52.864 00:08:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:52.864 -x option must be non-negative. 
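One pattern worth decoding from the es= traces in these negative tests (234 -> 106 -> 1 for the missing filename, 161 -> 33 -> 1 for compress-verify): statuses above 128 look like signal-style exits, so the harness appears to strip 128 before collapsing any remaining failure to a plain 1. A minimal re-creation of that normalization, inferred from the trace rather than taken from the harness source:

    normalize_es() {
        local es=$1
        (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106, 161 -> 33
        (( es != 0 )) && es=1                  # any failure reports as exit 1
        echo "$es"
    }
    normalize_es 234   # prints 1
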
00:04:52.864 [2024-07-16 00:08:44.387398] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:52.864 accel_perf options: 00:04:52.864 [-h help message] 00:04:52.864 [-q queue depth per core] 00:04:52.864 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:52.864 [-T number of threads per core 00:04:52.864 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:52.864 [-t time in seconds] 00:04:52.864 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:52.864 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:52.864 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:52.864 [-l for compress/decompress workloads, name of uncompressed input file 00:04:52.864 [-S for crc32c workload, use this seed value (default 0) 00:04:52.864 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:52.864 [-f for fill workload, use this BYTE value (default 255) 00:04:52.864 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:52.864 [-y verify result if this switch is on] 00:04:52.864 [-a tasks to allocate per core (default: same value as -q)] 00:04:52.864 Can be used to spread operations across a wider range of memory. 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.864 00:04:52.864 real 0m0.023s 00:04:52.864 user 0m0.011s 00:04:52.864 sys 0m0.012s 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.864 00:08:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:52.864 ************************************ 00:04:52.864 END TEST accel_negative_buffers 00:04:52.864 ************************************ 00:04:52.864 Error: writing output failed: Broken pipe 00:04:52.864 00:08:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.864 00:08:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:52.864 00:08:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:52.864 00:08:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.864 00:08:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.864 ************************************ 00:04:52.864 START TEST accel_crc32c 00:04:52.864 ************************************ 00:04:52.864 00:08:44 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@12 -- # 
00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:04:52.864 00:08:44 accel.accel_crc32c -- accel/accel.sh@31-41 -- # [build_accel_config xtrace condensed: accel_json_cfg=(), no env overrides, local IFS=',', jq -r .]
00:04:52.864 [2024-07-16 00:08:44.447488] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:52.864 [2024-07-16 00:08:44.447549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169613 ]
00:04:53.124 EAL: No free 2048 kB hugepages reported on node 1
00:04:53.124 [2024-07-16 00:08:44.513724] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:53.124 [2024-07-16 00:08:44.627946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.124 00:08:44 accel.accel_crc32c -- accel/accel.sh@19-21 -- # [repetitive var/val xtrace condensed] values read: 0x1, crc32c (accel_opc=crc32c), 32, '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes
00:04:54.505 00:08:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:54.505 00:08:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:04:54.505 00:08:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:54.505 real    0m1.463s
00:04:54.505 user    0m1.316s
00:04:54.505 sys     0m0.150s
00:04:54.505 00:08:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:54.505 00:08:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:04:54.505 ************************************
00:04:54.505 END TEST accel_crc32c
00:04:54.505 ************************************
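The repeated IFS=: / read -r var val / case "$var" in entries that dominate the raw trace come from accel.sh parsing accel_perf's key:value output one field at a time. A minimal sketch of a loop with that shape (illustrative only; apart from accel_opc and accel_module, which appear in the trace, the names are assumptions, not the verbatim accel.sh source):

    # Sketch: parse "key:value" lines the way the xtrace above suggests.
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;      # operation under test, e.g. crc32c
            module) accel_module=$val ;;   # engine that ran it, e.g. software
        esac
    done <<< "$perf_dump"                  # $perf_dump: assumed capture of the dump

The [[ -n software ]] / [[ -n crc32c ]] checks just before the timing lines are the harness confirming that both fields were actually parsed before the test is declared passed.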
00:04:54.505 00:08:45 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:54.505 00:08:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:04:54.505 00:08:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:04:54.505 ************************************
00:04:54.505 START TEST accel_crc32c_C2
00:04:54.505 ************************************
00:04:54.506 00:08:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:04:54.506 00:08:45 accel.accel_crc32c_C2 -- accel/accel.sh@31-41 -- # [build_accel_config xtrace condensed]
00:04:54.506 [2024-07-16 00:08:45.954280] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:54.506 [2024-07-16 00:08:45.954346] [ DPDK EAL parameters: identical to previous run, --file-prefix=spdk_pid2169809 ]
00:04:54.506 EAL: No free 2048 kB hugepages reported on node 1
00:04:54.506 [2024-07-16 00:08:46.016713] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.766 [2024-07-16 00:08:46.135332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:54.766 00:08:46 accel.accel_crc32c_C2 -- accel/accel.sh@19-21 -- # [repetitive var/val xtrace condensed] values read: 0x1, crc32c (accel_opc=crc32c), 0, '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes
00:04:56.152 00:08:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:56.152 00:08:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:04:56.152 00:08:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:56.152 real    0m1.476s
00:04:56.152 user    0m1.332s
00:04:56.152 sys     0m0.146s
00:04:56.152 ************************************
00:04:56.152 END TEST accel_crc32c_C2
00:04:56.152 ************************************
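Each of these cases is the same accel_perf binary with different flags; the C2 variant only adds -C 2, which reads as a chained (vectored) buffer count for the crc32c workload. To reproduce a case by hand outside the harness (flags copied from the invocation above; the -c /dev/fd/62 argument is how the harness feeds in the JSON config from build_accel_config and should be droppable for a default software run):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2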
00:04:56.152 00:08:47 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:56.152 00:08:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:04:56.152 00:08:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:04:56.152 ************************************
00:04:56.152 START TEST accel_copy
00:04:56.152 ************************************
00:04:56.152 00:08:47 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:04:56.152 00:08:47 accel.accel_copy -- accel/accel.sh@31-41 -- # [build_accel_config xtrace condensed]
00:04:56.152 [2024-07-16 00:08:47.484681] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:56.152 [2024-07-16 00:08:47.484744] [ DPDK EAL parameters: identical to previous run, --file-prefix=spdk_pid2170046 ]
00:04:56.152 EAL: No free 2048 kB hugepages reported on node 1
00:04:56.152 [2024-07-16 00:08:47.546578] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:56.152 [2024-07-16 00:08:47.663425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:56.152 00:08:47 accel.accel_copy -- accel/accel.sh@19-21 -- # [repetitive var/val xtrace condensed] values read: 0x1, copy (accel_opc=copy), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes
00:04:57.531 00:08:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:57.531 00:08:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:04:57.531 00:08:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:57.532 real    0m1.479s
00:04:57.532 user    0m1.337s
00:04:57.532 sys     0m0.144s
00:04:57.532 ************************************
00:04:57.532 END TEST accel_copy
00:04:57.532 ************************************
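The real/user/sys triplet after each END banner is Bash's time builtin applied to the whole test body by run_test: with -t 1 each workload itself runs for one second, so the ~1.46 to 1.48 s wall times imply roughly half a second of app start-up and teardown per case. A rough sketch of the wrapper's shape (the actual run_test in autotest_common.sh also manages xtrace and failure accounting; this is just the skeleton visible in the log, with the banner formatting simplified):

    # Sketch of run_test as it appears in this log (assumed body).
    run_test() {
        local name=$1; shift
        printf '************ START TEST %s ************\n' "$name"
        time "$@"          # produces the real/user/sys lines above
        printf '************ END TEST %s ************\n' "$name"
    }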
00:04:57.532 00:08:48 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:57.532 00:08:48 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:57.532 00:08:48 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:04:57.532 ************************************
00:04:57.532 START TEST accel_fill
00:04:57.532 ************************************
00:04:57.532 00:08:48 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:57.532 00:08:48 accel.accel_fill -- accel/accel.sh@31-41 -- # [build_accel_config xtrace condensed]
00:04:57.532 [2024-07-16 00:08:49.002778] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:57.532 [2024-07-16 00:08:49.002842] [ DPDK EAL parameters: identical to previous run, --file-prefix=spdk_pid2170199 ]
00:04:57.532 EAL: No free 2048 kB hugepages reported on node 1
00:04:57.532 [2024-07-16 00:08:49.063630] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.791 [2024-07-16 00:08:49.178728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.792 00:08:49 accel.accel_fill -- accel/accel.sh@19-21 -- # [repetitive var/val xtrace condensed] values read: 0x1, fill (accel_opc=fill), 0x80, '4096 bytes', software (accel_module=software), 64, 64, 1, '1 seconds', Yes
00:04:59.213 00:08:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:59.213 00:08:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:04:59.213 00:08:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:59.213 real    0m1.458s
00:04:59.213 user    0m1.311s
00:04:59.213 sys     0m0.149s
00:04:59.213 ************************************
00:04:59.213 END TEST accel_fill
00:04:59.213 ************************************
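The fill case's flags can be read straight out of the condensed trace: -f 128 shows up as the fill byte 0x80 (128 decimal), and the two 64 entries line up with the -q 64 -a 64 arguments. The hex conversion is a one-liner:

    printf '0x%x\n' 128    # -> 0x80, the fill value recorded in the trace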
00:04:59.213 00:08:50 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:59.213 00:08:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:04:59.213 00:08:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:04:59.213 ************************************
00:04:59.213 START TEST accel_copy_crc32c
00:04:59.213 ************************************
00:04:59.214 00:08:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:04:59.214 00:08:50 accel.accel_copy_crc32c -- accel/accel.sh@31-41 -- # [build_accel_config xtrace condensed]
00:04:59.214 [2024-07-16 00:08:50.509788] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:04:59.214 [2024-07-16 00:08:50.509850] [ DPDK EAL parameters: identical to previous run, --file-prefix=spdk_pid2170413 ]
00:04:59.214 EAL: No free 2048 kB hugepages reported on node 1
00:04:59.214 [2024-07-16 00:08:50.571850] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.214 [2024-07-16 00:08:50.690150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.214 00:08:50 accel.accel_copy_crc32c -- accel/accel.sh@19-21 -- # [repetitive var/val xtrace condensed] values read: 0x1, copy_crc32c (accel_opc=copy_crc32c), 0, '4096 bytes', '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes
00:05:00.594 00:08:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:00.594 00:08:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:00.594 00:08:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:00.594 real    0m1.479s
00:05:00.594 user    0m1.344s
00:05:00.594 sys     0m0.137s
00:05:00.594 ************************************
00:05:00.594 END TEST accel_copy_crc32c
00:05:00.594 ************************************
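Note the difference from plain crc32c above: the copy_crc32c trace records two 4096-byte buffers rather than one, consistent with the operation copying a source buffer while computing the CRC into a separate destination. It reproduces standalone the same way as the other cases (flags from the invocation above):

    ./build/examples/accel_perf -t 1 -w copy_crc32c -y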
00:05:00.594 00:08:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:00.594 00:08:51 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:00.594 00:08:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:00.594 ************************************
00:05:00.594 START TEST accel_copy_crc32c_C2
00:05:00.594 ************************************
00:05:00.594 00:08:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:00.594 00:08:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31-41 -- # [build_accel_config xtrace condensed]
00:05:00.594 [2024-07-16 00:08:52.032045] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:05:00.594 [2024-07-16 00:08:52.032102] [ DPDK EAL parameters: identical to previous run, --file-prefix=spdk_pid2170636 ]
00:05:00.594 EAL: No free 2048 kB hugepages reported on node 1
00:05:00.594 [2024-07-16 00:08:52.093463] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.853 [2024-07-16 00:08:52.212121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.853 00:08:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19-21 -- # [repetitive var/val xtrace condensed] values read: 0x1, copy_crc32c (accel_opc=copy_crc32c), 0, '4096 bytes', '8192 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes
00:05:02.231 00:08:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:02.231 00:08:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:02.231 00:08:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:02.231 real    0m1.472s
00:05:02.231 user    0m1.332s
00:05:02.231 sys     0m0.143s
00:05:02.231 ************************************
00:05:02.231 END TEST accel_copy_crc32c_C2
00:05:02.231 ************************************
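This -C 2 variant is the only case whose trace shows two different sizes, '4096 bytes' and '8192 bytes'; one plausible reading is two 4 KiB segments chained against an 8 KiB buffer, though the trace alone does not confirm the layout. A sketch for sweeping the whole crc32c family seen in this log (flags copied from the run_test invocations above):

    # Sweep the four crc32c-family cases from this log run.
    for args in "-w crc32c -S 32" "-w crc32c -C 2" "-w copy_crc32c" "-w copy_crc32c -C 2"; do
        ./build/examples/accel_perf -t 1 -y $args   # $args unquoted on purpose: word-split into flags
    done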
00:05:02.232 [2024-07-16 00:08:53.547897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170793 ] 00:05:02.232 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.232 [2024-07-16 00:08:53.608401] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.232 [2024-07-16 00:08:53.721445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.232 00:08:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.609 00:08:54 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:03.609 00:08:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.609 00:05:03.609 real 0m1.453s 00:05:03.609 user 0m1.319s 00:05:03.609 sys 0m0.135s 00:05:03.609 00:08:54 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.609 00:08:54 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:03.609 ************************************ 00:05:03.609 END TEST accel_dualcast 00:05:03.609 ************************************ 00:05:03.609 00:08:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:03.609 00:08:55 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:03.609 00:08:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:03.609 00:08:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.609 00:08:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.609 ************************************ 00:05:03.609 START TEST accel_compare 00:05:03.609 ************************************ 00:05:03.609 00:08:55 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:03.609 00:08:55 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:03.609 [2024-07-16 00:08:55.043055] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
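Each of these blocks, from one START TEST banner to the matching END TEST banner, is emitted by the run_test helper (from common/autotest_common.sh, going by the trace frames): it prints the opening banner, runs the named test function under the shell's time builtin, which is where the real/user/sys summaries come from, and prints the closing banner. A rough sketch of the wrapper's shape, simplified and not the verbatim SPDK implementation:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # e.g. accel_test -t 1 -w compare -y
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }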
00:05:03.609 [2024-07-16 00:08:55.043126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170973 ] 00:05:03.609 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.609 [2024-07-16 00:08:55.105224] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.883 [2024-07-16 00:08:55.222781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.883 00:08:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 
00:08:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:05.261 00:08:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.261 00:05:05.261 real 0m1.473s 00:05:05.261 user 0m1.329s 00:05:05.261 sys 0m0.146s 00:05:05.261 00:08:56 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.261 00:08:56 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:05.261 ************************************ 00:05:05.261 END TEST accel_compare 00:05:05.261 ************************************ 00:05:05.261 00:08:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:05.261 00:08:56 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:05.261 00:08:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:05.261 00:08:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.261 00:08:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.261 ************************************ 00:05:05.261 START TEST accel_xor 00:05:05.261 ************************************ 00:05:05.261 00:08:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:05.261 [2024-07-16 00:08:56.568805] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
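The first xor pass runs with the defaults: in the parse trace that follows, val=2 appears right after val=xor, which suggests two 4096-byte source buffers are XORed into the destination when no count is given. The equivalent hand invocation, with the empty JSON config again omitted:

    ./build/examples/accel_perf -t 1 -w xor -y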
00:05:05.261 [2024-07-16 00:08:56.568871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171228 ] 00:05:05.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.261 [2024-07-16 00:08:56.632588] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.261 [2024-07-16 00:08:56.749785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.261 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.262 00:08:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.639 00:05:06.639 real 0m1.487s 00:05:06.639 user 0m1.336s 00:05:06.639 sys 0m0.153s 00:05:06.639 00:08:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.639 00:08:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:06.639 ************************************ 00:05:06.639 END TEST accel_xor 00:05:06.639 ************************************ 00:05:06.639 00:08:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.639 00:08:58 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:06.639 00:08:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:06.639 00:08:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.639 00:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.639 ************************************ 00:05:06.639 START TEST accel_xor 00:05:06.639 ************************************ 00:05:06.639 00:08:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:06.639 00:08:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:06.640 00:08:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:06.640 [2024-07-16 00:08:58.100637] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
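The suite then repeats xor as run_test accel_xor with -x 3. Matching the val=3 entry in this parse trace, -x appears to set the number of xor source buffers, so this pass folds three 4096-byte sources into one destination instead of the default two:

    ./build/examples/accel_perf -t 1 -w xor -y -x 3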
00:05:06.640 [2024-07-16 00:08:58.100701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171381 ] 00:05:06.640 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.640 [2024-07-16 00:08:58.166321] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.898 [2024-07-16 00:08:58.287153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.898 00:08:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:08.274 00:08:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.274 00:05:08.274 real 0m1.470s 00:05:08.274 user 0m1.337s 00:05:08.274 sys 0m0.135s 00:05:08.274 00:08:59 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.274 00:08:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:08.274 ************************************ 00:05:08.274 END TEST accel_xor 00:05:08.274 ************************************ 00:05:08.274 00:08:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:08.274 00:08:59 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:08.274 00:08:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:08.274 00:08:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.274 00:08:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.274 ************************************ 00:05:08.274 START TEST accel_dif_verify 00:05:08.274 ************************************ 00:05:08.275 00:08:59 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:08.275 [2024-07-16 00:08:59.615370] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
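dif_verify is the first of the DIF (Data Integrity Field) workloads. Its parse trace carries a 4096-byte value twice, then 512 bytes, then 8 bytes; if those are the buffer size, the block size, and the per-block protection-information size (the standard T10 DIF field is 8 bytes: a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag), then each 4096-byte buffer holds eight 512-byte blocks and 64 bytes of DIF. A small arithmetic check under that assumption; the variable names are illustrative:

    buf=4096 blk=512 dif=8
    echo "$((buf / blk)) blocks, $(((buf / blk) * dif)) DIF bytes per buffer"   # 8 blocks, 64 DIF bytes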
00:05:08.275 [2024-07-16 00:08:59.615436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171601 ] 00:05:08.275 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.275 [2024-07-16 00:08:59.682177] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.275 [2024-07-16 00:08:59.805212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.275 00:08:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:09.652 00:09:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.652 00:05:09.652 real 0m1.487s 00:05:09.652 user 0m1.339s 00:05:09.652 sys 0m0.151s 00:05:09.652 00:09:01 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.652 00:09:01 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:09.652 ************************************ 00:05:09.652 END TEST accel_dif_verify 00:05:09.652 ************************************ 00:05:09.652 00:09:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:09.652 00:09:01 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:09.652 00:09:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:09.652 00:09:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.652 00:09:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.652 ************************************ 00:05:09.652 START TEST accel_dif_generate 00:05:09.652 ************************************ 00:05:09.652 00:09:01 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.652 
00:09:01 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:09.652 00:09:01 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:09.653 [2024-07-16 00:09:01.155856] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:05:09.653 [2024-07-16 00:09:01.155953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171887 ] 00:05:09.653 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.653 [2024-07-16 00:09:01.220660] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.913 [2024-07-16 00:09:01.345553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:09.913 00:09:01 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.913 00:09:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.312 00:09:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:11.312 00:09:02 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.312 00:05:11.312 real 0m1.496s 00:05:11.313 user 0m1.352s 00:05:11.313 sys 0m0.146s 00:05:11.313 00:09:02 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.313 00:09:02 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:11.313 ************************************ 00:05:11.313 END TEST accel_dif_generate 00:05:11.313 ************************************ 00:05:11.313 00:09:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.313 00:09:02 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:11.313 00:09:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:11.313 00:09:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.313 00:09:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.313 ************************************ 00:05:11.313 START TEST accel_dif_generate_copy 00:05:11.313 ************************************ 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:11.313 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:11.313 [2024-07-16 00:09:02.696073] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
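Each accel test in this stretch of the log follows the same shape: accel.sh's accel_test wrapper builds a config (build_accel_config), launches build/examples/accel_perf, and then replays the expected option values through the case/read loop traced above. As a minimal standalone sketch of the dif_generate_copy invocation just logged (path taken from the log; the harness also passes -c /dev/fd/62 with a generated accel JSON config, omitted here on the assumption that the default software module is enough):

    #!/usr/bin/env bash
    # Sketch of the accel_perf call traced above; workspace path from the log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run the workload for 1 second ('1 seconds' in the trace)
    # -w dif_generate_copy: DIF generate+copy workload on the software module
    "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy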
00:05:11.313 [2024-07-16 00:09:02.696138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172086 ] 00:05:11.313 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.313 [2024-07-16 00:09:02.761383] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.313 [2024-07-16 00:09:02.888098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.573 00:09:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.959 00:05:12.959 real 0m1.487s 00:05:12.959 user 0m1.341s 00:05:12.959 sys 0m0.147s 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.959 00:09:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:12.959 ************************************ 00:05:12.959 END TEST accel_dif_generate_copy 00:05:12.959 ************************************ 00:05:12.959 00:09:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.959 00:09:04 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:12.959 00:09:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.959 00:09:04 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:12.959 00:09:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.959 00:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.959 ************************************ 00:05:12.959 START TEST accel_comp 00:05:12.959 ************************************ 00:05:12.959 00:09:04 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.959 00:09:04 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:12.959 [2024-07-16 00:09:04.226547] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:05:12.959 [2024-07-16 00:09:04.226614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172364 ] 00:05:12.959 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.959 [2024-07-16 00:09:04.287481] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.959 [2024-07-16 00:09:04.410674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.959 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.960 00:09:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.343 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:14.344 00:09:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.344 00:05:14.344 real 0m1.491s 00:05:14.344 user 0m1.341s 00:05:14.344 sys 0m0.152s 00:05:14.344 00:09:05 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.344 00:09:05 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:14.344 ************************************ 00:05:14.344 END TEST accel_comp 00:05:14.344 ************************************ 00:05:14.344 00:09:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.344 00:09:05 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:14.344 00:09:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:14.344 00:09:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.344 00:09:05 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:14.344 ************************************ 00:05:14.344 START TEST accel_decomp 00:05:14.344 ************************************ 00:05:14.344 00:09:05 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:14.344 00:09:05 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:14.344 [2024-07-16 00:09:05.764840] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
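The compress/decompress cases swap the synthetic DIF buffers for a real input: -l points accel_perf at the sample file test/accel/bib and, for decompress, -y asks it to verify the output. A hedged sketch of the pair of invocations the wrapper issues, under the same assumptions as the sketch above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BIB=$SPDK/test/accel/bib   # sample input shipped with the accel tests
    # compress the sample file in software for 1 second
    "$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$BIB"
    # decompress it again; -y verifies the round-tripped data
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y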
00:05:14.344 [2024-07-16 00:09:05.764916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172522 ] 00:05:14.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.344 [2024-07-16 00:09:05.826754] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.344 [2024-07-16 00:09:05.949918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.604 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.605 00:09:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:16.040 00:09:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.040 00:05:16.040 real 0m1.489s 00:05:16.040 user 0m1.341s 00:05:16.040 sys 0m0.150s 00:05:16.040 00:09:07 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.040 00:09:07 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:16.040 ************************************ 00:05:16.040 END TEST accel_decomp 00:05:16.040 ************************************ 00:05:16.040 00:09:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:16.040 00:09:07 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:16.040 00:09:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:16.040 00:09:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.040 00:09:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.040 ************************************ 00:05:16.040 START TEST accel_decomp_full 00:05:16.040 ************************************ 00:05:16.040 00:09:07 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:16.040 00:09:07 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:16.040 [2024-07-16 00:09:07.301589] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:05:16.040 [2024-07-16 00:09:07.301653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172734 ] 00:05:16.040 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.040 [2024-07-16 00:09:07.360091] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.040 [2024-07-16 00:09:07.482805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.040 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.041 00:09:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:17.419 00:09:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.419 00:05:17.419 real 0m1.487s 00:05:17.419 user 0m1.344s 00:05:17.419 sys 0m0.146s 00:05:17.419 00:09:08 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.419 00:09:08 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:17.419 ************************************ 00:05:17.419 END TEST accel_decomp_full 00:05:17.419 ************************************ 00:05:17.419 00:09:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.419 00:09:08 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:17.419 00:09:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:17.419 00:09:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.419 00:09:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.419 ************************************ 00:05:17.419 START TEST accel_decomp_mcore 00:05:17.419 ************************************ 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:17.419 00:09:08 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:17.419 [2024-07-16 00:09:08.836548] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
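accel_decomp_mcore, starting above, adds -m 0xf, a hex core mask in the usual SPDK/DPDK style selecting cores 0-3; the EAL output that follows ('Total cores available: 4', four 'Reactor started' notices) shows the mask taking effect. A sketch:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BIB=$SPDK/test/accel/bib
    # -m 0xf: run reactors on cores 0-3, so four instances of the
    # decompress workload proceed in parallel
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -m 0xf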
00:05:17.419 [2024-07-16 00:09:08.836613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173390 ] 00:05:17.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.419 [2024-07-16 00:09:08.898016] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.419 [2024-07-16 00:09:09.022986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.419 [2024-07-16 00:09:09.023039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.419 [2024-07-16 00:09:09.023090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.419 [2024-07-16 00:09:09.023094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.678 00:09:09 
00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:17.678 00:09:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:19.055 00:09:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:19.055 00:09:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:19.055 00:09:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:19.055 real 0m1.494s
00:05:19.055 user 0m4.795s
00:05:19.055 sys 0m0.158s
00:05:19.055 ************************************
00:05:19.055 END TEST accel_decomp_mcore
00:05:19.055 ************************************
00:05:19.055 00:09:10 accel -- common/autotest_common.sh@1142 -- # return 0
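The user-vs-wall split above is the quick health check for the multicore variant: the decompress work was fanned out across several reactors, so CPU time accrues roughly three times faster than wall time. A one-line check of the ratio (the multi-reactor reading is an inference from the 0xf core mask these mcore tests request):

    echo 'scale=2; 4.795 / 1.494' | bc   # ~3.21 CPU-seconds per wall-clock second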
00:05:19.055 00:09:10 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:19.055 ************************************
00:05:19.055 START TEST accel_decomp_full_mcore
00:05:19.055 ************************************
00:05:19.055 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:19.055 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:19.055 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:19.055 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
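For readers reproducing this step by hand: the harness wraps the accel_perf example binary, and the full command line is visible in the trace above. Below is a minimal standalone sketch; the flag readings are inferred from the test names and this trace (verify against accel_perf --help on your build), and the empty JSON object fed on fd 62 is an assumed stand-in for the harness's build_accel_config output.

    #!/usr/bin/env bash
    # Hypothetical standalone rerun of the accel_decomp_full_mcore perf step.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run for 1 second; -w decompress: workload type; -l: compressed input
    # file used by the decompress tests; -y: verify output; -o 0: whole-file
    # operations (reading inferred from the "full" test names); -m 0xf: core mask
    # for cores 0-3; -c: accel config JSON read from fd 62.
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf 62< <(echo '{}')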
00:05:19.055 [2024-07-16 00:09:10.375114] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:05:19.055 [2024-07-16 00:09:10.375177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173622 ]
00:05:19.055 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.055 [2024-07-16 00:09:10.441167] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:19.055 [2024-07-16 00:09:10.567495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:19.055 [2024-07-16 00:09:10.567551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:19.055 [2024-07-16 00:09:10.567602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:19.055 [2024-07-16 00:09:10.567606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
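The EAL line shows the -c 0xf coremask that produced the four reactor threads above. A quick bash sketch for expanding such a mask into core ids (the 64-bit scan bound is an arbitrary choice for the sketch):

    mask=0xf
    for ((i = 0; i < 64; i++)); do
        # Bit i set in the mask means a reactor is expected on core i.
        (( (mask >> i) & 1 )) && echo "core $i"
    done
    # Prints core 0 through core 3 for 0xf, matching the reactor notices.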
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:19.056 00:09:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:20.431 00:09:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:20.431 00:09:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:20.431 00:09:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:20.431 real 0m1.518s
00:05:20.431 user 0m4.859s
00:05:20.431 sys 0m0.170s
00:05:20.431 ************************************
00:05:20.431 END TEST accel_decomp_full_mcore
00:05:20.431 ************************************
00:05:20.431 00:09:11 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:20.431 00:09:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:20.431 ************************************
00:05:20.431 START TEST accel_decomp_mthread
00:05:20.431 ************************************
00:05:20.431 00:09:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:20.431 00:09:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:20.431 00:09:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:20.431 00:09:11 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:05:20.431 [2024-07-16 00:09:11.939388] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:05:20.431 [2024-07-16 00:09:11.939450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173789 ]
00:05:20.431 EAL: No free 2048 kB hugepages reported on node 1
00:05:20.431 [2024-07-16 00:09:12.003139] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.431 [2024-07-16 00:09:12.125967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:05:20.691 00:09:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:05:22.070 00:09:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:22.071 00:09:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:22.071 00:09:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:22.071 real 0m1.493s
00:05:22.071 user 0m1.353s
00:05:22.071 sys 0m0.142s
00:05:22.071 ************************************
00:05:22.071 END TEST accel_decomp_mthread
00:05:22.071 ************************************
00:05:22.071 00:09:13 accel -- common/autotest_common.sh@1142 -- # return 0
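Worth noting when comparing these runs: accel_decomp_full_mcore spread the work over four reactors (-m 0xf; user 4.859s against a 1.518s wall), while accel_decomp_mthread stayed on one core and scaled by worker threads instead (-T 2; user 1.353s, close to the 1.493s wall). A side-by-side sketch of the two invocations, under the same assumptions as the earlier sketch (flag readings inferred from the trace, empty JSON as a stand-in config):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BIB="$SPDK/test/accel/bib"
    PERF="$SPDK/build/examples/accel_perf"
    # Parallelism via reactors: one SPDK thread per core in the 0xf mask.
    "$PERF" -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -m 0xf 62< <(echo '{}')
    # Parallelism via worker threads on a single core.
    "$PERF" -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -T 2 62< <(echo '{}')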
00:05:22.071 00:09:13 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:22.071 ************************************
00:05:22.071 START TEST accel_decomp_full_mthread
00:05:22.071 ************************************
00:05:22.071 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:22.071 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:22.071 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:22.071 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:05:22.071 [2024-07-16 00:09:13.482658] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:05:22.071 [2024-07-16 00:09:13.482732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174056 ]
00:05:22.071 EAL: No free 2048 kB hugepages reported on node 1
00:05:22.071 [2024-07-16 00:09:13.546802] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.071 [2024-07-16 00:09:13.672493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:05:22.331 00:09:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:05:23.711 00:09:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:23.711 00:09:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:23.711 00:09:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:23.711 real 0m1.532s
00:05:23.711 user 0m1.387s
00:05:23.711 sys 0m0.147s
00:05:23.711 ************************************
00:05:23.711 END TEST accel_decomp_full_mthread
00:05:23.711 ************************************
00:05:23.711 00:09:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:23.711 00:09:15 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:05:23.711 00:09:15 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:23.711 00:09:15 accel -- accel/accel.sh@137 -- # build_accel_config
00:05:23.711 ************************************
00:05:23.711 START TEST accel_dif_functional_tests
00:05:23.711 ************************************
00:05:23.711 00:09:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:23.711 [2024-07-16 00:09:15.085142] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:05:23.711 [2024-07-16 00:09:15.085219] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174220 ]
00:05:23.711 EAL: No free 2048 kB hugepages reported on node 1
00:05:23.711 [2024-07-16 00:09:15.152091] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:23.711 [2024-07-16 00:09:15.277113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.711 [2024-07-16 00:09:15.277167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:23.711 [2024-07-16 00:09:15.277171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.971 CUnit - A unit testing framework for C - Version 2.1-3
00:05:23.971 http://cunit.sourceforge.net/
00:05:23.971 Suite: accel_dif
00:05:23.971 Test: verify: DIF generated, GUARD check ...passed
00:05:23.971 Test: verify: DIF generated, APPTAG check ...passed
00:05:23.971 Test: verify: DIF generated, REFTAG check ...passed
00:05:23.971 Test: verify: DIF not generated, GUARD check ...[2024-07-16 00:09:15.378351] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:23.971 passed
00:05:23.971 Test: verify: DIF not generated, APPTAG check ...[2024-07-16 00:09:15.378426] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:23.971 passed
00:05:23.971 Test: verify: DIF not generated, REFTAG check ...[2024-07-16 00:09:15.378472] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:23.971 passed
00:05:23.971 Test: verify: APPTAG correct, APPTAG check ...passed
00:05:23.971 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-16 00:09:15.378546] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:05:23.971 passed
00:05:23.971 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:05:23.971 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:05:23.971 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:05:23.971 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-16 00:09:15.378704] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:05:23.971 passed
00:05:23.971 Test: verify copy: DIF generated, GUARD check ...passed
00:05:23.971 Test: verify copy: DIF generated, APPTAG check ...passed
00:05:23.971 Test: verify copy: DIF generated, REFTAG check ...passed
00:05:23.971 Test: verify copy: DIF not generated, GUARD check ...[2024-07-16 00:09:15.378892] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:23.971 passed
00:05:23.971 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-16 00:09:15.378940] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:23.971 passed
00:05:23.971 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-16 00:09:15.378979] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:23.971 passed
00:05:23.971 Test: generate copy: DIF generated, GUARD check ...passed
00:05:23.971 Test: generate copy: DIF generated, APTTAG check ...passed
00:05:23.971 Test: generate copy: DIF generated, REFTAG check ...passed
00:05:23.971 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:05:23.971 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:05:23.971 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:05:23.971 Test: generate copy: iovecs-len validate ...[2024-07-16 00:09:15.379234] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:05:23.971 passed
00:05:23.971 Test: generate copy: buffer alignment validate ...passed
00:05:23.971 
00:05:23.971 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:23.971               suites      1      1    n/a      0        0
00:05:23.971                tests     26     26     26      0        0
00:05:23.971              asserts    115    115    115      0      n/a
00:05:23.971 
00:05:23.971 Elapsed time =    0.003 seconds
00:05:24.230 real 0m0.600s
00:05:24.230 user 0m0.883s
00:05:24.230 sys 0m0.189s
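The suite above exercises SPDK's software DIF path: each Test line checks one field of the per-block protection tuple (the Guard tag, a CRC of the block data; the Application tag; the Reference tag), and the dif.c *ERROR* lines are the mismatches deliberately injected by the negative cases, which is why they appear next to "passed". The harness launches the test binary with an accel config delivered on fd 62; a sketch of that launch follows, where the empty JSON object is an assumed stand-in for build_accel_config output:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Run the DIF functional tests with a minimal accel configuration on fd 62.
    "$SPDK/test/accel/dif/dif" -c /dev/fd/62 62< <(echo '{}')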
00:05:24.230 ************************************
00:05:24.230 END TEST accel_dif_functional_tests
00:05:24.230 ************************************
00:05:24.230 00:09:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:24.230 real 0m33.586s
00:05:24.230 user 0m36.977s
00:05:24.230 sys 0m4.657s
00:05:24.230 ************************************
00:05:24.230 END TEST accel
00:05:24.230 ************************************
00:05:24.230 00:09:15 -- common/autotest_common.sh@1142 -- # return 0
00:05:24.230 00:09:15 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:24.230 ************************************
00:05:24.230 START TEST accel_rpc
00:05:24.230 ************************************
00:05:24.230 00:09:15 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:24.230 * Looking for test storage...
00:05:24.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:05:24.230 00:09:15 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:24.230 00:09:15 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2174410
00:05:24.230 00:09:15 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:05:24.230 00:09:15 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2174410
00:05:24.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:24.230 [2024-07-16 00:09:15.803153] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:05:24.230 [2024-07-16 00:09:15.803261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174410 ]
00:05:24.230 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.490 [2024-07-16 00:09:15.865184] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.490 [2024-07-16 00:09:15.983174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.429 00:09:16 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:25.429 00:09:16 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:05:25.429 ************************************
00:05:25.429 START TEST accel_assign_opcode
00:05:25.429 ************************************
00:05:25.429 00:09:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:05:25.429 [2024-07-16 00:09:16.825801] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:05:25.429 00:09:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:05:25.429 [2024-07-16 00:09:16.833813] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:05:25.429 00:09:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:05:25.689 00:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:05:25.689 00:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:05:25.689 00:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:05:25.689 software
00:05:25.689 real 0m0.292s
00:05:25.689 user 0m0.039s
00:05:25.689 sys 0m0.008s
00:05:25.689 ************************************
00:05:25.689 END TEST accel_assign_opcode
00:05:25.689 ************************************
00:05:25.689 00:09:17 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:25.689 00:09:17 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2174410
00:05:25.689 00:09:17 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2174410
00:05:25.689 00:09:17 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2174410'
00:05:25.689 killing process with pid 2174410
00:05:25.689 00:09:17 accel_rpc -- common/autotest_common.sh@967 -- # kill 2174410
00:05:25.689 00:09:17 accel_rpc -- common/autotest_common.sh@972 -- # wait 2174410
00:05:26.258 real 0m1.943s
00:05:26.258 user 0m2.108s
00:05:26.258 sys 0m0.481s
00:05:26.258 ************************************
00:05:26.258 END TEST accel_rpc
00:05:26.258 ************************************
00:05:26.258 00:09:17 -- common/autotest_common.sh@1142 -- # return 0
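The accel_rpc test above shows the intended sequence for reassigning an accel opcode: the target is started with --wait-for-rpc so that accel_assign_opc lands before subsystem initialization, then framework_start_init completes startup, and accel_get_opc_assignments confirms the module. A condensed sketch of that flow using the same paths as the trace (the sleep is a crude stand-in for the harness's waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    sleep 2                                         # wait for /var/tmp/spdk.sock
    "$RPC" accel_assign_opc -o copy -m software     # only valid pre-init
    "$RPC" framework_start_init                     # finish subsystem init
    "$RPC" accel_get_opc_assignments | jq -r .copy  # expect "software"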
00:05:26.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:26.258 00:09:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:26.258 00:09:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2174741 00:05:26.258 00:09:17 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:26.258 00:09:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2174741 00:05:26.258 00:09:17 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2174741 ']' 00:05:26.258 00:09:17 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.258 00:09:17 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.258 00:09:17 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.258 00:09:17 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.258 00:09:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.258 [2024-07-16 00:09:17.805074] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:05:26.258 [2024-07-16 00:09:17.805172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174741 ] 00:05:26.258 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.258 [2024-07-16 00:09:17.862642] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.517 [2024-07-16 00:09:17.969451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.776 00:09:18 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.776 00:09:18 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:26.777 00:09:18 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:27.035 { 00:05:27.035 "version": "SPDK v24.09-pre git sha1 47ca8c1aa", 00:05:27.035 "fields": { 00:05:27.035 "major": 24, 00:05:27.035 "minor": 9, 00:05:27.035 "patch": 0, 00:05:27.035 "suffix": "-pre", 00:05:27.035 "commit": "47ca8c1aa" 00:05:27.035 } 00:05:27.035 } 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:27.035 00:09:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:27.035 00:09:18 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.295 request: 00:05:27.295 { 00:05:27.295 "method": "env_dpdk_get_mem_stats", 00:05:27.295 "req_id": 1 00:05:27.295 } 00:05:27.295 Got JSON-RPC error response 00:05:27.295 response: 00:05:27.295 { 00:05:27.295 "code": -32601, 00:05:27.295 "message": "Method not found" 00:05:27.295 } 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.295 00:09:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2174741 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2174741 ']' 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2174741 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2174741 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2174741' 00:05:27.295 killing process with pid 2174741 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@967 -- # kill 2174741 00:05:27.295 00:09:18 app_cmdline -- common/autotest_common.sh@972 -- # wait 2174741 00:05:27.864 00:05:27.864 real 0m1.629s 00:05:27.864 user 0m2.004s 00:05:27.864 sys 0m0.459s 00:05:27.864 00:09:19 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
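The allow-list behavior exercised above can be reproduced directly with scripts/rpc.py against the same socket. A minimal sketch, assuming rpc.py's default socket /var/tmp/spdk.sock (the one this run listens on) and that the harness's rpc_cmd wraps the same calls:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods
  $rpc spdk_get_version         # allowed: returns the version JSON shown above
  $rpc rpc_get_methods          # allowed: lists exactly the two whitelisted methods
  $rpc env_dpdk_get_mem_stats   # filtered: JSON-RPC error -32601, 'Method not found'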
00:05:27.864 00:09:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 ************************************ 00:05:27.864 END TEST app_cmdline 00:05:27.864 ************************************ 00:05:27.864 00:09:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.864 00:09:19 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:27.864 00:09:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.864 00:09:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.864 00:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 ************************************ 00:05:27.864 START TEST version 00:05:27.864 ************************************ 00:05:27.864 00:09:19 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:27.864 * Looking for test storage... 00:05:27.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:27.864 00:09:19 version -- app/version.sh@17 -- # get_header_version major 00:05:27.864 00:09:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.864 00:09:19 version -- app/version.sh@14 -- # cut -f2 00:05:27.864 00:09:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:27.864 00:09:19 version -- app/version.sh@17 -- # major=24 00:05:27.864 00:09:19 version -- app/version.sh@18 -- # get_header_version minor 00:05:27.864 00:09:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.864 00:09:19 version -- app/version.sh@14 -- # cut -f2 00:05:27.864 00:09:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:27.864 00:09:19 version -- app/version.sh@18 -- # minor=9 00:05:27.864 00:09:19 version -- app/version.sh@19 -- # get_header_version patch 00:05:27.864 00:09:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.864 00:09:19 version -- app/version.sh@14 -- # cut -f2 00:05:27.864 00:09:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:27.864 00:09:19 version -- app/version.sh@19 -- # patch=0 00:05:27.864 00:09:19 version -- app/version.sh@20 -- # get_header_version suffix 00:05:27.864 00:09:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.864 00:09:19 version -- app/version.sh@14 -- # cut -f2 00:05:27.864 00:09:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:27.864 00:09:19 version -- app/version.sh@20 -- # suffix=-pre 00:05:27.864 00:09:19 version -- app/version.sh@22 -- # version=24.9 00:05:27.864 00:09:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:27.864 00:09:19 version -- app/version.sh@28 -- # version=24.9rc0 00:05:27.864 00:09:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:27.864 00:09:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:05:27.864 00:09:19 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:27.864 00:09:19 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:27.864 00:05:27.864 real 0m0.103s 00:05:27.864 user 0m0.060s 00:05:27.864 sys 0m0.066s 00:05:27.864 00:09:19 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.864 00:09:19 version -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 ************************************ 00:05:27.864 END TEST version 00:05:27.864 ************************************ 00:05:28.124 00:09:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.124 00:09:19 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:28.124 00:09:19 -- spdk/autotest.sh@198 -- # uname -s 00:05:28.124 00:09:19 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:28.124 00:09:19 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:28.124 00:09:19 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:28.124 00:09:19 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:28.124 00:09:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:28.124 00:09:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:28.124 00:09:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.124 00:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 00:09:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:28.124 00:09:19 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:28.124 00:09:19 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:28.124 00:09:19 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:28.124 00:09:19 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:28.124 00:09:19 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:28.124 00:09:19 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:28.124 00:09:19 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:28.124 00:09:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.124 00:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 ************************************ 00:05:28.124 START TEST nvmf_tcp 00:05:28.124 ************************************ 00:05:28.124 00:09:19 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:28.124 * Looking for test storage... 00:05:28.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.124 00:09:19 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.124 00:09:19 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.124 00:09:19 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.124 00:09:19 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.124 00:09:19 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.124 00:09:19 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.124 00:09:19 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:28.124 00:09:19 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:28.124 00:09:19 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.124 00:09:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:28.124 00:09:19 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:28.124 00:09:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:28.124 00:09:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.124 00:09:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 ************************************ 00:05:28.124 START TEST nvmf_example 00:05:28.124 ************************************ 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:28.124 * Looking for test storage... 
00:05:28.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.124 00:09:19 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:28.125 00:09:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:30.030 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:30.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:30.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:30.031 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:30.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:30.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:30.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:05:30.031 00:05:30.031 --- 10.0.0.2 ping statistics --- 00:05:30.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:30.031 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:30.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:30.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:05:30.031 00:05:30.031 --- 10.0.0.1 ping statistics --- 00:05:30.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:30.031 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:30.031 00:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2176643 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2176643 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2176643 ']' 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
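The namespace plumbing traced by nvmf_tcp_init above reduces to a short sequence; a condensed sketch, with device names and addresses exactly as in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                    # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace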
00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.305 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.305 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.584 00:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:30.584 00:09:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:30.584 EAL: No free 2048 kB hugepages reported on node 1 
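Before spdk_nvme_perf is launched, the target is provisioned over RPC; the sequence above reduces to five calls (subsystem NQN, serial, and Malloc geometry as in this run; rpc_cmd in the harness is assumed to forward to scripts/rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                      # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The perf invocation then reads as: queue depth 64 (-q), 4 KiB I/Os (-o), random mixed workload at a 30% read share (-w randrw -M 30), for 10 seconds (-t), against the listener just created.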
00:05:42.796 Initializing NVMe Controllers 00:05:42.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:42.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:42.796 Initialization complete. Launching workers. 00:05:42.796 ======================================================== 00:05:42.796 Latency(us) 00:05:42.796 Device Information : IOPS MiB/s Average min max 00:05:42.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14933.50 58.33 4285.61 781.24 15654.74 00:05:42.796 ======================================================== 00:05:42.796 Total : 14933.50 58.33 4285.61 781.24 15654.74 00:05:42.796 00:05:42.796 00:09:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:42.796 00:09:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:42.796 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:42.796 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:42.796 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:42.797 rmmod nvme_tcp 00:05:42.797 rmmod nvme_fabrics 00:05:42.797 rmmod nvme_keyring 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2176643 ']' 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2176643 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2176643 ']' 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2176643 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2176643 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2176643' 00:05:42.797 killing process with pid 2176643 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2176643 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2176643 00:05:42.797 nvmf threads initialize successfully 00:05:42.797 bdev subsystem init successfully 00:05:42.797 created a nvmf target service 00:05:42.797 create targets's poll groups done 00:05:42.797 all subsystems of target started 00:05:42.797 nvmf target is running 00:05:42.797 all subsystems of target stopped 00:05:42.797 destroy targets's poll groups done 00:05:42.797 destroyed the nvmf target service 00:05:42.797 bdev subsystem finish successfully 00:05:42.797 nvmf threads destroy successfully 00:05:42.797 00:09:32 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:42.797 00:09:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.368 00:09:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:43.368 00:09:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:43.368 00:09:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.368 00:09:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:43.368 00:05:43.368 real 0m15.131s 00:05:43.368 user 0m41.762s 00:05:43.368 sys 0m3.489s 00:05:43.368 00:09:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.368 00:09:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:43.368 ************************************ 00:05:43.368 END TEST nvmf_example 00:05:43.368 ************************************ 00:05:43.368 00:09:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:05:43.368 00:09:34 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:43.368 00:09:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:43.368 00:09:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.368 00:09:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.368 ************************************ 00:05:43.368 START TEST nvmf_filesystem 00:05:43.368 ************************************ 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:43.368 * Looking for test storage... 
00:05:43.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:43.368 00:09:34 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:43.368 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:43.369 #define SPDK_CONFIG_H 00:05:43.369 #define SPDK_CONFIG_APPS 1 00:05:43.369 #define SPDK_CONFIG_ARCH native 00:05:43.369 #undef SPDK_CONFIG_ASAN 00:05:43.369 #undef SPDK_CONFIG_AVAHI 00:05:43.369 #undef SPDK_CONFIG_CET 00:05:43.369 #define SPDK_CONFIG_COVERAGE 1 00:05:43.369 #define SPDK_CONFIG_CROSS_PREFIX 00:05:43.369 #undef SPDK_CONFIG_CRYPTO 00:05:43.369 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:43.369 #undef SPDK_CONFIG_CUSTOMOCF 00:05:43.369 #undef SPDK_CONFIG_DAOS 00:05:43.369 #define SPDK_CONFIG_DAOS_DIR 00:05:43.369 #define SPDK_CONFIG_DEBUG 1 00:05:43.369 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:43.369 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:43.369 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:43.369 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:43.369 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:43.369 #undef SPDK_CONFIG_DPDK_UADK 00:05:43.369 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:43.369 #define SPDK_CONFIG_EXAMPLES 1 00:05:43.369 #undef SPDK_CONFIG_FC 00:05:43.369 #define SPDK_CONFIG_FC_PATH 00:05:43.369 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:43.369 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:43.369 #undef SPDK_CONFIG_FUSE 00:05:43.369 #undef SPDK_CONFIG_FUZZER 00:05:43.369 #define SPDK_CONFIG_FUZZER_LIB 00:05:43.369 #undef SPDK_CONFIG_GOLANG 00:05:43.369 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:43.369 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:43.369 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:43.369 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:43.369 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:43.369 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:43.369 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:43.369 #define SPDK_CONFIG_IDXD 1 00:05:43.369 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:43.369 #undef SPDK_CONFIG_IPSEC_MB 00:05:43.369 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:43.369 #define SPDK_CONFIG_ISAL 1 00:05:43.369 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:43.369 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:43.369 #define SPDK_CONFIG_LIBDIR 00:05:43.369 #undef SPDK_CONFIG_LTO 00:05:43.369 #define SPDK_CONFIG_MAX_LCORES 128 00:05:43.369 #define SPDK_CONFIG_NVME_CUSE 1 00:05:43.369 #undef SPDK_CONFIG_OCF 00:05:43.369 #define SPDK_CONFIG_OCF_PATH 00:05:43.369 #define 
SPDK_CONFIG_OPENSSL_PATH 00:05:43.369 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:43.369 #define SPDK_CONFIG_PGO_DIR 00:05:43.369 #undef SPDK_CONFIG_PGO_USE 00:05:43.369 #define SPDK_CONFIG_PREFIX /usr/local 00:05:43.369 #undef SPDK_CONFIG_RAID5F 00:05:43.369 #undef SPDK_CONFIG_RBD 00:05:43.369 #define SPDK_CONFIG_RDMA 1 00:05:43.369 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:43.369 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:43.369 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:43.369 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:43.369 #define SPDK_CONFIG_SHARED 1 00:05:43.369 #undef SPDK_CONFIG_SMA 00:05:43.369 #define SPDK_CONFIG_TESTS 1 00:05:43.369 #undef SPDK_CONFIG_TSAN 00:05:43.369 #define SPDK_CONFIG_UBLK 1 00:05:43.369 #define SPDK_CONFIG_UBSAN 1 00:05:43.369 #undef SPDK_CONFIG_UNIT_TESTS 00:05:43.369 #undef SPDK_CONFIG_URING 00:05:43.369 #define SPDK_CONFIG_URING_PATH 00:05:43.369 #undef SPDK_CONFIG_URING_ZNS 00:05:43.369 #undef SPDK_CONFIG_USDT 00:05:43.369 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:43.369 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:43.369 #define SPDK_CONFIG_VFIO_USER 1 00:05:43.369 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:43.369 #define SPDK_CONFIG_VHOST 1 00:05:43.369 #define SPDK_CONFIG_VIRTIO 1 00:05:43.369 #undef SPDK_CONFIG_VTUNE 00:05:43.369 #define SPDK_CONFIG_VTUNE_DIR 00:05:43.369 #define SPDK_CONFIG_WERROR 1 00:05:43.369 #define SPDK_CONFIG_WPDK_DIR 00:05:43.369 #undef SPDK_CONFIG_XNVME 00:05:43.369 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:43.369 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:43.370 00:09:34 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:43.370 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
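Note: the autotest_common.sh lines just above write a LeakSanitizer suppression for libfuse3 into /var/tmp/asan_suppression_file and point LSAN_OPTIONS at it. A minimal standalone sketch of that pattern, using only the path and suppressed library printed in the log (everything else is generic shell):

    #!/usr/bin/env bash
    # Rebuild the suppression file from scratch so stale entries do not linger.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    # Known benign leak in libfuse3; without this, LSAN would fail the run.
    cat <<'EOF' > "$asan_suppression_file"
    leak:libfuse3.so
    EOF
    # LeakSanitizer reads its suppression list from this variable.
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
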
00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2178344 ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2178344 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Lax0rm 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Lax0rm/tests/target /tmp/spdk.Lax0rm 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55543382016 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6451310592 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996021248 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:05:43.371 00:09:34 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1327104 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:05:43.371 * Looking for test storage... 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55543382016 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8665903104 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:05:43.371 00:09:34 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:43.371 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.372 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
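Note: the set_test_storage trace above boils down to parsing `df -T` into per-mount arrays, resolving which mount backs the test directory, and comparing its free space against the requested size. A condensed sketch of that logic, keeping the variable names from the log (the real helper also special-cases tmpfs/ramfs and enforces a 95%-of-filesystem ceiling, which this sketch omits):

    #!/usr/bin/env bash
    # From the log: 2 GiB requested plus margin -> 2214592512 bytes.
    requested_size=2214592512
    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target

    # df -T prints sizes in 1K blocks; convert to bytes, keyed by mount point.
    declare -A avails
    while read -r source fs size use avail _ mount; do
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

    # Resolve the mount point backing the test directory (log result: "/").
    mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$testdir
        printf '* Found test storage at %s\n' "$testdir"
    fi
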
00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.631 00:09:34 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:43.631 00:09:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:45.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:45.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:45.542 00:09:36 
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:05:45.542 Found net devices under 0000:0a:00.0: cvl_0_0
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:05:45.542 Found net devices under 0000:0a:00.1: cvl_0_1
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:05:45.542 00:09:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:05:45.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:45.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms
00:05:45.542 
00:05:45.542 --- 10.0.0.2 ping statistics ---
00:05:45.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:45.542 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:45.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:45.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms
00:05:45.542 
00:05:45.542 --- 10.0.0.1 ping statistics ---
00:05:45.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:45.542 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:05:45.542 ************************************
00:05:45.542 START TEST nvmf_filesystem_no_in_capsule
00:05:45.542 ************************************
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable
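
Condensed, the nvmf_tcp_init sequence above turns the two E810 ports into a two-endpoint topology: the target port moves into a private network namespace, the initiator port stays in the root namespace, and a ping in each direction proves the path before any NVMe traffic flows. All names and addresses below are taken from the log itself:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
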
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:45.542 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2179965
00:05:45.543 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:05:45.543 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2179965
00:05:45.543 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2179965 ']'
00:05:45.543 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:45.543 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:45.543 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:45.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:45.543 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:45.543 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:45.803 [2024-07-16 00:09:37.166858] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:05:45.803 [2024-07-16 00:09:37.166965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:45.803 EAL: No free 2048 kB hugepages reported on node 1
00:05:45.803 [2024-07-16 00:09:37.230621] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:45.803 [2024-07-16 00:09:37.343634] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:45.803 [2024-07-16 00:09:37.343705] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:45.803 [2024-07-16 00:09:37.343718] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:45.803 [2024-07-16 00:09:37.343733] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:45.803 [2024-07-16 00:09:37.343742] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:45.803 [2024-07-16 00:09:37.343822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:45.803 [2024-07-16 00:09:37.343896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:45.803 [2024-07-16 00:09:37.343955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:45.803 [2024-07-16 00:09:37.343958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:46.061 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:46.061 [2024-07-16 00:09:37.502736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:46.062 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:46.062 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:05:46.062 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:46.062 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:46.062 Malloc1
00:05:46.062 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:46.062 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:05:46.062 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:46.062 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
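
Outside of this harness, the same target bring-up can be driven by hand with SPDK's rpc.py. The RPC names and arguments below are exactly the ones rpc_cmd issues in the trace; the script and binary paths are the usual locations in an SPDK checkout, not something this log states:

    # Start the target (the harness wraps this in `ip netns exec cvl_0_0_ns_spdk`).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Transport, backing bdev, subsystem, namespace, listener:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB disk, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
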
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:46.320 [2024-07-16 00:09:37.690581] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:46.320 00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:05:46.320 {
00:05:46.320 "name": "Malloc1",
00:05:46.320 "aliases": [
00:05:46.320 "df4eccb6-17df-494d-9df2-5424686e8374"
00:05:46.320 ],
00:05:46.320 "product_name": "Malloc disk",
00:05:46.320 "block_size": 512,
00:05:46.320 "num_blocks": 1048576,
00:05:46.320 "uuid": "df4eccb6-17df-494d-9df2-5424686e8374",
00:05:46.320 "assigned_rate_limits": {
00:05:46.320 "rw_ios_per_sec": 0,
00:05:46.320 "rw_mbytes_per_sec": 0,
00:05:46.320 "r_mbytes_per_sec": 0,
00:05:46.320 "w_mbytes_per_sec": 0
00:05:46.320 },
00:05:46.320 "claimed": true,
00:05:46.320 "claim_type": "exclusive_write",
00:05:46.320 "zoned": false,
00:05:46.320 "supported_io_types": {
00:05:46.320 "read": true,
00:05:46.320 "write": true,
00:05:46.320 "unmap": true,
00:05:46.320 "flush": true,
00:05:46.320 "reset": true,
00:05:46.320 "nvme_admin": false,
00:05:46.320 "nvme_io": false,
00:05:46.320 "nvme_io_md": false,
00:05:46.320 "write_zeroes": true,
00:05:46.320 "zcopy": true,
00:05:46.320 "get_zone_info": false,
00:05:46.320 "zone_management": false,
00:05:46.320 "zone_append": false,
00:05:46.320 "compare": false,
00:05:46.320 "compare_and_write": false,
00:05:46.320 "abort": true,
00:05:46.320 "seek_hole": false,
00:05:46.320 "seek_data": false,
00:05:46.320 "copy": true,
00:05:46.320 "nvme_iov_md": false
00:05:46.320 },
00:05:46.320 "memory_domains": [
00:05:46.320 {
00:05:46.320 "dma_device_id": "system",
00:05:46.320 "dma_device_type": 1
00:05:46.320 },
00:05:46.320 {
00:05:46.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:46.320 "dma_device_type": 2
00:05:46.320 }
00:05:46.320 ],
00:05:46.320 "driver_specific": {}
00:05:46.320 }
00:05:46.320 ]'
00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:05:46.888 00:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0
00:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2
00:05:49.426 00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:09:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:05:49.686 00:09:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:05:50.623 00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:50.623 ************************************
00:05:50.623 START TEST filesystem_ext4
00:05:50.623 ************************************
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']'
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:05:50.623 mke2fs 1.46.5 (30-Dec-2021)
00:05:50.623 Discarding device blocks: 0/522240 done
00:05:50.882 Creating filesystem with 522240 1k blocks and 130560 inodes
00:05:50.882 Filesystem UUID: ea504d4f-8ba8-43af-bbdf-3c558db87f59
00:05:50.882 Superblock backups stored on blocks:
00:05:50.882 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:05:50.882 
00:05:50.882 Allocating group tables: 0/64 done
00:05:50.882 Writing inode tables: 0/64 done
00:05:53.413 Creating journal (8192 blocks): done
00:05:53.413 Writing superblocks and filesystem accounting information: 0/64 done
00:05:53.413 
00:05:53.413 00:09:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0
00:09:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:05:53.671 00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2179965
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:05:53.671 
00:05:53.671 real 0m3.140s
00:05:53.671 user 0m0.015s
00:05:53.671 sys 0m0.052s
00:05:53.671 00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:05:53.671 ************************************
00:05:53.671 END TEST filesystem_ext4
00:05:53.671 ************************************
00:05:53.671 00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
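
Each filesystem_* subtest is the same cycle against the exported namespace: mkfs on the partition, mount, create and delete a file with syncs in between, unmount, and then a kill -0 on the target PID to confirm nvmf_tgt survived the I/O. A condensed sketch of that cycle, with the paths and PID taken from the log and error handling trimmed:

    fs_cycle() {
        local fstype=$1 dev=/dev/nvme0n1p1 pid=2179965
        case $fstype in
            ext4) mkfs.ext4 -F "$dev" ;;          # ext4 wants -F to force
            *)    "mkfs.$fstype" -f "$dev" ;;     # btrfs/xfs use -f
        esac
        mount "$dev" /mnt/device
        touch /mnt/device/aaa && sync
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$pid"    # is the target still alive after the I/O?
    }
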
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:53.671 ************************************
00:05:53.671 START TEST filesystem_btrfs
00:05:53.671 ************************************
00:05:53.671 00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:05:54.249 btrfs-progs v6.6.2
00:05:54.249 See https://btrfs.readthedocs.io for more information.
00:05:54.249 
00:05:54.249 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:05:54.249 NOTE: several default settings have changed in version 5.15, please make sure
00:05:54.249 this does not affect your deployments:
00:05:54.249 - DUP for metadata (-m dup)
00:05:54.249 - enabled no-holes (-O no-holes)
00:05:54.249 - enabled free-space-tree (-R free-space-tree)
00:05:54.249 
00:05:54.249 Label: (null)
00:05:54.249 UUID: 9c9c8cd5-07c8-4580-b1e3-b68e600b522b
00:05:54.249 Node size: 16384
00:05:54.249 Sector size: 4096
00:05:54.249 Filesystem size: 510.00MiB
00:05:54.249 Block group profiles:
00:05:54.249 Data: single 8.00MiB
00:05:54.249 Metadata: DUP 32.00MiB
00:05:54.249 System: DUP 8.00MiB
00:05:54.249 SSD detected: yes
00:05:54.249 Zoned device: no
00:05:54.249 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:05:54.249 Runtime features: free-space-tree
00:05:54.249 Checksum: crc32c
00:05:54.249 Number of devices: 1
00:05:54.249 Devices:
00:05:54.249 ID SIZE PATH
00:05:54.249 1 510.00MiB /dev/nvme0n1p1
00:05:54.249 
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0
00:09:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:05:54.852 00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2179965
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:05:54.852 
00:05:54.852 real 0m1.148s
00:05:54.852 user 0m0.022s
00:05:54.852 sys 0m0.109s
00:05:54.852 00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:05:54.852 ************************************
00:05:54.852 END TEST filesystem_btrfs
00:05:54.852 ************************************
00:05:54.852 00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
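
The mkfs.btrfs banner in this subtest is only the tool documenting its post-5.15 defaults. Spelled out, the equivalent explicit invocation would be the following, using the flag names the NOTE itself lists:

    # Same result as the bare `mkfs.btrfs -f /dev/nvme0n1p1` under btrfs-progs >= 5.15:
    mkfs.btrfs -f -m dup -O no-holes -R free-space-tree /dev/nvme0n1p1
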
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:55.111 ************************************
00:05:55.111 START TEST filesystem_xfs
00:05:55.111 ************************************
00:05:55.111 00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:05:55.111 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:05:55.111 = sectsz=512 attr=2, projid32bit=1
00:05:55.111 = crc=1 finobt=1, sparse=1, rmapbt=0
00:05:55.111 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:05:55.111 data = bsize=4096 blocks=130560, imaxpct=25
00:05:55.111 = sunit=0 swidth=0 blks
00:05:55.111 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:05:55.111 log =internal log bsize=4096 blocks=16384, version=2
00:05:55.111 = sectsz=512 sunit=0 blks, lazy-count=1
00:05:55.111 realtime =none extsz=4096 blocks=0, rtextents=0
00:05:56.045 Discarding blocks...Done.
00:05:56.045 00:09:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0
00:09:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:05:57.953 00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2179965
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:05:57.953 
00:05:57.953 real 0m3.042s
00:05:57.953 user 0m0.014s
00:05:57.953 sys 0m0.067s
00:05:57.953 00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:05:57.953 ************************************
00:05:57.953 END TEST filesystem_xfs
00:05:57.953 ************************************
00:05:57.953 00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:05:58.212 00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:05:58.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:05:58.212 00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2179965
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2179965 ']'
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2179965
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2179965
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2179965'
00:05:58.212 killing process with pid 2179965
00:05:58.212 00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2179965
00:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2179965
00:05:58.781 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:05:58.781 
00:05:58.781 real 0m13.166s
00:05:58.781 user 0m50.452s
00:05:58.781 sys 0m1.828s
00:05:58.781 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:58.781 ************************************
00:05:58.781 END TEST nvmf_filesystem_no_in_capsule
00:05:58.781 ************************************
00:05:58.781 00:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0
00:09:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:05:58.781 ************************************
00:05:58.781 START TEST nvmf_filesystem_in_capsule
00:05:58.781 ************************************
00:05:58.781 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2181678
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2181678
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2181678 ']'
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:58.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.781 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:59.039 [2024-07-16 00:09:50.387799] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:05:59.039 [2024-07-16 00:09:50.387896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:59.039 EAL: No free 2048 kB hugepages reported on node 1
00:05:59.039 [2024-07-16 00:09:50.460809] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:59.039 [2024-07-16 00:09:50.586523] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:59.039 [2024-07-16 00:09:50.586592] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
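
The second pass repeats the whole suite with in-capsule data enabled: nvmf_filesystem_part is called with 4096 instead of 0, and the functional difference downstream is the -c argument to nvmf_create_transport (visible a few lines below). The two invocations compare as:

    # no-in-capsule pass: command data always arrives as a separate data transfer
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # in-capsule pass: up to 4096 bytes of data may ride inside the command capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
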
00:05:59.039 [2024-07-16 00:09:50.586608] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:59.039 [2024-07-16 00:09:50.586622] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:59.039 [2024-07-16 00:09:50.586634] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:59.039 [2024-07-16 00:09:50.586697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:59.039 [2024-07-16 00:09:50.586753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:59.039 [2024-07-16 00:09:50.586805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:59.039 [2024-07-16 00:09:50.586808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:59.299 [2024-07-16 00:09:50.741832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:59.299 Malloc1
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:59.299 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:59.562 [2024-07-16 00:09:50.923311] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:05:59.562 {
00:05:59.562 "name": "Malloc1",
00:05:59.562 "aliases": [
00:05:59.562 "44d6cca9-3ddd-4767-8ca3-684cf33da65c"
00:05:59.562 ],
00:05:59.562 "product_name": "Malloc disk",
00:05:59.562 "block_size": 512,
00:05:59.562 "num_blocks": 1048576,
00:05:59.562 "uuid": "44d6cca9-3ddd-4767-8ca3-684cf33da65c",
00:05:59.562 "assigned_rate_limits": {
00:05:59.562 "rw_ios_per_sec": 0,
00:05:59.562 "rw_mbytes_per_sec": 0,
00:05:59.562 "r_mbytes_per_sec": 0,
00:05:59.562 "w_mbytes_per_sec": 0
00:05:59.562 },
00:05:59.562 "claimed": true,
00:05:59.562 "claim_type": "exclusive_write",
00:05:59.562 "zoned": false,
00:05:59.562 "supported_io_types": {
00:05:59.562 "read": true,
00:05:59.562 "write": true,
00:05:59.562 "unmap": true,
00:05:59.562 "flush": true,
00:05:59.562 "reset": true,
00:05:59.562 "nvme_admin": false,
00:05:59.562 "nvme_io": false,
00:05:59.562 "nvme_io_md": false,
00:05:59.562 "write_zeroes": true,
00:05:59.562 "zcopy": true,
00:05:59.562 "get_zone_info": false,
00:05:59.562 "zone_management": false,
"zone_append": false, 00:05:59.562 "compare": false, 00:05:59.562 "compare_and_write": false, 00:05:59.562 "abort": true, 00:05:59.562 "seek_hole": false, 00:05:59.562 "seek_data": false, 00:05:59.562 "copy": true, 00:05:59.562 "nvme_iov_md": false 00:05:59.562 }, 00:05:59.562 "memory_domains": [ 00:05:59.562 { 00:05:59.562 "dma_device_id": "system", 00:05:59.562 "dma_device_type": 1 00:05:59.562 }, 00:05:59.562 { 00:05:59.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.562 "dma_device_type": 2 00:05:59.562 } 00:05:59.562 ], 00:05:59.562 "driver_specific": {} 00:05:59.562 } 00:05:59.562 ]' 00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:59.562 00:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:59.562 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:59.562 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:59.562 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:59.562 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:59.562 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:00.131 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:00.131 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:00.131 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:00.131 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:00.131 00:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:02.663 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:02.663 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:02.663 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:02.663 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:02.663 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:02.664 00:09:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:02.664 00:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:03.228 00:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.160 ************************************ 00:06:04.160 START TEST filesystem_in_capsule_ext4 00:06:04.160 ************************************ 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:04.160 00:09:55 
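
As in the first pass, the namespace is carved into a single GPT partition before any mkfs runs; partprobe plus the one-second sleep gives the kernel time to surface /dev/nvme0n1p1. The commands are the ones in the trace above:

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe           # re-read the partition table
    sleep 1             # let /dev/nvme0n1p1 appear before mkfs
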
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:04.160 00:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:04.160 mke2fs 1.46.5 (30-Dec-2021) 00:06:04.418 Discarding device blocks: 0/522240 done 00:06:04.418 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:04.418 Filesystem UUID: 14a280cd-0823-406b-9882-e1a664fe8fa6 00:06:04.418 Superblock backups stored on blocks: 00:06:04.418 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:04.418 00:06:04.418 Allocating group tables: 0/64 done 00:06:04.418 Writing inode tables: 0/64 done 00:06:06.958 Creating journal (8192 blocks): done 00:06:06.958 Writing superblocks and filesystem accounting information: 0/64 done 00:06:06.958 00:06:06.958 00:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:06.958 00:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2181678 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:07.898 00:06:07.898 real 0m3.585s 00:06:07.898 user 0m0.031s 00:06:07.898 sys 0m0.048s 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:07.898 ************************************ 00:06:07.898 END TEST filesystem_in_capsule_ext4 00:06:07.898 ************************************ 00:06:07.898 
00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:07.898 ************************************ 00:06:07.898 START TEST filesystem_in_capsule_btrfs 00:06:07.898 ************************************ 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:07.898 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:08.158 btrfs-progs v6.6.2 00:06:08.158 See https://btrfs.readthedocs.io for more information. 00:06:08.159 00:06:08.159 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:08.159 NOTE: several default settings have changed in version 5.15, please make sure 00:06:08.159 this does not affect your deployments: 00:06:08.159 - DUP for metadata (-m dup) 00:06:08.159 - enabled no-holes (-O no-holes) 00:06:08.159 - enabled free-space-tree (-R free-space-tree) 00:06:08.159 00:06:08.159 Label: (null) 00:06:08.159 UUID: 074295d5-3b09-4800-8f96-18e62ab57985 00:06:08.159 Node size: 16384 00:06:08.159 Sector size: 4096 00:06:08.159 Filesystem size: 510.00MiB 00:06:08.159 Block group profiles: 00:06:08.159 Data: single 8.00MiB 00:06:08.159 Metadata: DUP 32.00MiB 00:06:08.159 System: DUP 8.00MiB 00:06:08.159 SSD detected: yes 00:06:08.159 Zoned device: no 00:06:08.159 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:08.159 Runtime features: free-space-tree 00:06:08.159 Checksum: crc32c 00:06:08.159 Number of devices: 1 00:06:08.159 Devices: 00:06:08.159 ID SIZE PATH 00:06:08.159 1 510.00MiB /dev/nvme0n1p1 00:06:08.159 00:06:08.159 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:08.159 00:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2181678 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:09.094 00:06:09.094 real 0m1.221s 00:06:09.094 user 0m0.020s 00:06:09.094 sys 0m0.114s 00:06:09.094 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:09.095 ************************************ 00:06:09.095 END TEST filesystem_in_capsule_btrfs 00:06:09.095 ************************************ 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.095 ************************************ 00:06:09.095 START TEST filesystem_in_capsule_xfs 00:06:09.095 ************************************ 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:09.095 00:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:09.095 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:09.095 = sectsz=512 attr=2, projid32bit=1 00:06:09.095 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:09.095 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:09.095 data = bsize=4096 blocks=130560, imaxpct=25 00:06:09.095 = sunit=0 swidth=0 blks 00:06:09.095 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:09.095 log =internal log bsize=4096 blocks=16384, version=2 00:06:09.095 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:09.095 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:10.030 Discarding blocks...Done. 
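Two details of the three mkfs runs are worth pinning down. First, the force flag: the xtrace shows make_filesystem testing '[' <fstype> = ext4 ']' and picking -F for mke2fs but -f for mkfs.btrfs and mkfs.xfs. Second, the reported geometries all agree with the 510.00MiB partition; the remaining ~2 MiB of the 512 MiB namespace goes to GPT metadata and partition alignment. A sketch of the branch, paraphrased from the xtrace rather than copied from autotest_common.sh (the real helper also keeps a retry counter, visible as 'local i=0'), followed by the size cross-check:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"
    }

    # ext4 and xfs describe the same 510 MiB span in different units:
    echo $(( 522240 * 1024 ))             # 534773760 bytes (mke2fs: 522240 1k blocks)
    echo $(( 130560 * 4096 ))             # 534773760 bytes (mkfs.xfs: 130560 x bsize 4096)
    echo $(( 534773760 / 1024 / 1024 ))   # 510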
00:06:10.030 00:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:10.030 00:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2181678 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:11.972 00:06:11.972 real 0m2.599s 00:06:11.972 user 0m0.012s 00:06:11.972 sys 0m0.065s 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:11.972 ************************************ 00:06:11.972 END TEST filesystem_in_capsule_xfs 00:06:11.972 ************************************ 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:11.972 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:12.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:12.234 00:10:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2181678 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2181678 ']' 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2181678 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2181678 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2181678' 00:06:12.234 killing process with pid 2181678 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2181678 00:06:12.234 00:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2181678 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:12.803 00:06:12.803 real 0m13.842s 00:06:12.803 user 0m53.099s 00:06:12.803 sys 0m1.920s 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:12.803 ************************************ 00:06:12.803 END TEST nvmf_filesystem_in_capsule 00:06:12.803 ************************************ 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:12.803 rmmod nvme_tcp 00:06:12.803 rmmod nvme_fabrics 00:06:12.803 rmmod nvme_keyring 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:12.803 00:10:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.707 00:10:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:14.707 00:06:14.707 real 0m31.485s 00:06:14.707 user 1m44.446s 00:06:14.707 sys 0m5.329s 00:06:14.707 00:10:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.707 00:10:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.707 ************************************ 00:06:14.707 END TEST nvmf_filesystem 00:06:14.707 ************************************ 00:06:14.966 00:10:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:14.966 00:10:06 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:14.966 00:10:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:14.966 00:10:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.966 00:10:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.966 ************************************ 00:06:14.966 START TEST nvmf_target_discovery 00:06:14.966 ************************************ 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:14.966 * Looking for test storage... 
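By this point nvmftestfini has already unwound the filesystem fixture: the initiator-side kernel modules come out first (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring above), then the target-side network namespace is torn down and the leftover initiator address flushed. A sketch of that order; the ip netns delete line is an assumption about what _remove_spdk_ns does, since the log only shows its xtrace wrapper and the final flush:

    modprobe -v -r nvme-tcp                        # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

With the slate clean, the discovery test rebuilds the same e810 netns topology from scratch before starting its own nvmf_tgt.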
00:06:14.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:14.966 00:10:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.872 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.872 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:16.872 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:16.872 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.873 00:10:08 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:16.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:16.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:16.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:16.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.873 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:17.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:06:17.132 00:06:17.132 --- 10.0.0.2 ping statistics --- 00:06:17.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.132 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:06:17.132 00:06:17.132 --- 10.0.0.1 ping statistics --- 00:06:17.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.132 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2185437 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2185437 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2185437 ']' 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:17.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.132 00:10:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.132 [2024-07-16 00:10:08.591831] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:06:17.132 [2024-07-16 00:10:08.591951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.132 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.132 [2024-07-16 00:10:08.662923] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.391 [2024-07-16 00:10:08.784097] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.391 [2024-07-16 00:10:08.784154] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.391 [2024-07-16 00:10:08.784179] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.391 [2024-07-16 00:10:08.784192] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.391 [2024-07-16 00:10:08.784203] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.391 [2024-07-16 00:10:08.784277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.391 [2024-07-16 00:10:08.784331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.391 [2024-07-16 00:10:08.784385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.391 [2024-07-16 00:10:08.784388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.958 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.958 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:17.958 00:10:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:17.958 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.958 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 [2024-07-16 00:10:09.599705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
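The rpc_cmd records that follow build the discovery fixture four times over: each pass creates a 102400-block, 512-byte-block null bdev NullN, wraps it in subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDK0000000000000N, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420; a listener on the discovery subsystem and a referral to port 4430 complete the picture. Condensed into the loop that the xtrace's 'for i in $(seq 1 4)' implies:

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

That accounts for all six records in the discovery page printed further down: one current discovery subsystem, four nvme subsystems, and one referral on trsvcid 4430.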
00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 Null1 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 [2024-07-16 00:10:09.640016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 Null2 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:18.217 00:10:09 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 Null3 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 Null4 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.217 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.218 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.218 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:18.476 00:06:18.476 Discovery Log Number of Records 6, Generation counter 6 00:06:18.476 =====Discovery Log Entry 0====== 00:06:18.476 trtype: tcp 00:06:18.476 adrfam: ipv4 00:06:18.476 subtype: current discovery subsystem 00:06:18.476 treq: not required 00:06:18.476 portid: 0 00:06:18.476 trsvcid: 4420 00:06:18.476 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:18.476 traddr: 10.0.0.2 00:06:18.476 eflags: explicit discovery connections, duplicate discovery information 00:06:18.476 sectype: none 00:06:18.476 =====Discovery Log Entry 1====== 00:06:18.476 trtype: tcp 00:06:18.476 adrfam: ipv4 00:06:18.476 subtype: nvme subsystem 00:06:18.476 treq: not required 00:06:18.476 portid: 0 00:06:18.476 trsvcid: 4420 00:06:18.476 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:18.476 traddr: 10.0.0.2 00:06:18.476 eflags: none 00:06:18.476 sectype: none 00:06:18.476 =====Discovery Log Entry 2====== 00:06:18.476 trtype: tcp 00:06:18.476 adrfam: ipv4 00:06:18.476 subtype: nvme subsystem 00:06:18.476 treq: not required 00:06:18.476 portid: 0 00:06:18.476 trsvcid: 4420 00:06:18.476 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:18.476 traddr: 10.0.0.2 00:06:18.476 eflags: none 00:06:18.476 sectype: none 00:06:18.476 =====Discovery Log Entry 3====== 00:06:18.476 trtype: tcp 00:06:18.476 adrfam: ipv4 00:06:18.476 subtype: nvme subsystem 00:06:18.476 treq: not required 00:06:18.476 portid: 0 00:06:18.476 trsvcid: 4420 00:06:18.476 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:18.476 traddr: 10.0.0.2 00:06:18.476 eflags: none 00:06:18.476 sectype: none 00:06:18.476 =====Discovery Log Entry 4====== 00:06:18.476 trtype: tcp 00:06:18.476 adrfam: ipv4 00:06:18.476 subtype: nvme subsystem 00:06:18.476 treq: not required 
00:06:18.476 portid: 0 00:06:18.476 trsvcid: 4420 00:06:18.476 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:18.476 traddr: 10.0.0.2 00:06:18.476 eflags: none 00:06:18.476 sectype: none 00:06:18.476 =====Discovery Log Entry 5====== 00:06:18.476 trtype: tcp 00:06:18.476 adrfam: ipv4 00:06:18.476 subtype: discovery subsystem referral 00:06:18.476 treq: not required 00:06:18.476 portid: 0 00:06:18.476 trsvcid: 4430 00:06:18.476 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:18.476 traddr: 10.0.0.2 00:06:18.476 eflags: none 00:06:18.476 sectype: none 00:06:18.476 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:18.476 Perform nvmf subsystem discovery via RPC 00:06:18.476 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:18.476 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.476 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.476 [ 00:06:18.476 { 00:06:18.476 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:18.476 "subtype": "Discovery", 00:06:18.476 "listen_addresses": [ 00:06:18.476 { 00:06:18.476 "trtype": "TCP", 00:06:18.476 "adrfam": "IPv4", 00:06:18.476 "traddr": "10.0.0.2", 00:06:18.476 "trsvcid": "4420" 00:06:18.476 } 00:06:18.476 ], 00:06:18.476 "allow_any_host": true, 00:06:18.476 "hosts": [] 00:06:18.476 }, 00:06:18.476 { 00:06:18.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:18.476 "subtype": "NVMe", 00:06:18.476 "listen_addresses": [ 00:06:18.476 { 00:06:18.476 "trtype": "TCP", 00:06:18.476 "adrfam": "IPv4", 00:06:18.476 "traddr": "10.0.0.2", 00:06:18.476 "trsvcid": "4420" 00:06:18.476 } 00:06:18.476 ], 00:06:18.476 "allow_any_host": true, 00:06:18.476 "hosts": [], 00:06:18.476 "serial_number": "SPDK00000000000001", 00:06:18.476 "model_number": "SPDK bdev Controller", 00:06:18.476 "max_namespaces": 32, 00:06:18.476 "min_cntlid": 1, 00:06:18.476 "max_cntlid": 65519, 00:06:18.476 "namespaces": [ 00:06:18.476 { 00:06:18.476 "nsid": 1, 00:06:18.476 "bdev_name": "Null1", 00:06:18.476 "name": "Null1", 00:06:18.477 "nguid": "A1505E33A9A84ADABCCB40CACB906B7A", 00:06:18.477 "uuid": "a1505e33-a9a8-4ada-bccb-40cacb906b7a" 00:06:18.477 } 00:06:18.477 ] 00:06:18.477 }, 00:06:18.477 { 00:06:18.477 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:18.477 "subtype": "NVMe", 00:06:18.477 "listen_addresses": [ 00:06:18.477 { 00:06:18.477 "trtype": "TCP", 00:06:18.477 "adrfam": "IPv4", 00:06:18.477 "traddr": "10.0.0.2", 00:06:18.477 "trsvcid": "4420" 00:06:18.477 } 00:06:18.477 ], 00:06:18.477 "allow_any_host": true, 00:06:18.477 "hosts": [], 00:06:18.477 "serial_number": "SPDK00000000000002", 00:06:18.477 "model_number": "SPDK bdev Controller", 00:06:18.477 "max_namespaces": 32, 00:06:18.477 "min_cntlid": 1, 00:06:18.477 "max_cntlid": 65519, 00:06:18.477 "namespaces": [ 00:06:18.477 { 00:06:18.477 "nsid": 1, 00:06:18.477 "bdev_name": "Null2", 00:06:18.477 "name": "Null2", 00:06:18.477 "nguid": "124B9EFA590B447E915864114BD144CB", 00:06:18.477 "uuid": "124b9efa-590b-447e-9158-64114bd144cb" 00:06:18.477 } 00:06:18.477 ] 00:06:18.477 }, 00:06:18.477 { 00:06:18.477 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:18.477 "subtype": "NVMe", 00:06:18.477 "listen_addresses": [ 00:06:18.477 { 00:06:18.477 "trtype": "TCP", 00:06:18.477 "adrfam": "IPv4", 00:06:18.477 "traddr": "10.0.0.2", 00:06:18.477 "trsvcid": "4420" 00:06:18.477 } 00:06:18.477 ], 00:06:18.477 "allow_any_host": true, 
00:06:18.477 "hosts": [], 00:06:18.477 "serial_number": "SPDK00000000000003", 00:06:18.477 "model_number": "SPDK bdev Controller", 00:06:18.477 "max_namespaces": 32, 00:06:18.477 "min_cntlid": 1, 00:06:18.477 "max_cntlid": 65519, 00:06:18.477 "namespaces": [ 00:06:18.477 { 00:06:18.477 "nsid": 1, 00:06:18.477 "bdev_name": "Null3", 00:06:18.477 "name": "Null3", 00:06:18.477 "nguid": "AC4DB2C3C8E34F07AE0B727D4269C2CB", 00:06:18.477 "uuid": "ac4db2c3-c8e3-4f07-ae0b-727d4269c2cb" 00:06:18.477 } 00:06:18.477 ] 00:06:18.477 }, 00:06:18.477 { 00:06:18.477 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:18.477 "subtype": "NVMe", 00:06:18.477 "listen_addresses": [ 00:06:18.477 { 00:06:18.477 "trtype": "TCP", 00:06:18.477 "adrfam": "IPv4", 00:06:18.477 "traddr": "10.0.0.2", 00:06:18.477 "trsvcid": "4420" 00:06:18.477 } 00:06:18.477 ], 00:06:18.477 "allow_any_host": true, 00:06:18.477 "hosts": [], 00:06:18.477 "serial_number": "SPDK00000000000004", 00:06:18.477 "model_number": "SPDK bdev Controller", 00:06:18.477 "max_namespaces": 32, 00:06:18.477 "min_cntlid": 1, 00:06:18.477 "max_cntlid": 65519, 00:06:18.477 "namespaces": [ 00:06:18.477 { 00:06:18.477 "nsid": 1, 00:06:18.477 "bdev_name": "Null4", 00:06:18.477 "name": "Null4", 00:06:18.477 "nguid": "09AF2C019F6B41F4A823959FF7A247ED", 00:06:18.477 "uuid": "09af2c01-9f6b-41f4-a823-959ff7a247ed" 00:06:18.477 } 00:06:18.477 ] 00:06:18.477 } 00:06:18.477 ] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:18.477 00:10:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:18.478 00:10:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:18.478 rmmod nvme_tcp 00:06:18.478 rmmod nvme_fabrics 00:06:18.478 rmmod nvme_keyring 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2185437 ']' 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2185437 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2185437 ']' 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2185437 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2185437 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2185437' 00:06:18.478 killing process with pid 2185437 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2185437 00:06:18.478 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2185437 00:06:18.735 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:18.735 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:18.735 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:18.735 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:18.736 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:18.736 00:10:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.736 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:18.736 00:10:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.273 00:10:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:21.273 00:06:21.273 real 0m6.010s 00:06:21.273 user 0m7.052s 00:06:21.273 sys 0m1.781s 00:06:21.273 00:10:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.273 00:10:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.273 ************************************ 00:06:21.273 END TEST nvmf_target_discovery 00:06:21.273 ************************************ 00:06:21.273 00:10:12 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:21.273 00:10:12 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:21.273 00:10:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.273 00:10:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.273 00:10:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.273 ************************************ 00:06:21.273 START TEST nvmf_referrals 00:06:21.273 ************************************ 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:21.273 * Looking for test storage... 00:06:21.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.273 00:10:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
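With the referral addresses 127.0.0.2, 127.0.0.3, and 127.0.0.4 set, referrals.sh exercises the discovery-referral RPCs end to end: it registers each address as a referral on service port 4430 (NVMF_PORT_REFERRAL, defined just below), confirms through both nvmf_discovery_get_referrals and an nvme discover against the 8009 discovery listener that the entries are visible, and then removes them again, as the rest of this transcript shows. A hedged standalone sketch of one such add/verify/remove round, under the same assumptions as the sketch above (running nvmf_tgt, scripts/rpc.py on PATH):

  # add a referral, confirm it from both the RPC and the wire, then remove it
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # expect: 127.0.0.2
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json                   # shows a "discovery subsystem referral" entry
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430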
00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:21.274 00:10:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.180 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.180 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:23.180 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:23.180 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:23.180 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:23.180 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.181 00:10:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:23.181 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:23.181 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:23.181 00:10:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:23.181 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:23.181 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.181 00:10:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:23.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:06:23.181 00:06:23.181 --- 10.0.0.2 ping statistics --- 00:06:23.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.181 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:06:23.181 00:06:23.181 --- 10.0.0.1 ping statistics --- 00:06:23.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.181 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2187538 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2187538 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2187538 ']' 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:23.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.181 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.181 [2024-07-16 00:10:14.570227] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:06:23.181 [2024-07-16 00:10:14.570304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.181 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.181 [2024-07-16 00:10:14.642920] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.181 [2024-07-16 00:10:14.768271] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.181 [2024-07-16 00:10:14.768339] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.181 [2024-07-16 00:10:14.768354] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.181 [2024-07-16 00:10:14.768369] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.182 [2024-07-16 00:10:14.768380] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:23.182 [2024-07-16 00:10:14.768443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.182 [2024-07-16 00:10:14.768501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.182 [2024-07-16 00:10:14.768554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.182 [2024-07-16 00:10:14.768557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.441 [2024-07-16 00:10:14.921639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.441 [2024-07-16 00:10:14.933824] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:23.441 00:10:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.441 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.700 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.958 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:24.219 00:10:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:24.219 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:24.219 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:24.219 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:24.219 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.219 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:24.478 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:24.479 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:24.479 00:10:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:24.479 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:24.479 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:24.479 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.479 00:10:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:24.479 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:24.479 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:24.479 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:24.479 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:24.479 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.479 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:24.737 
00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:24.737 rmmod nvme_tcp 00:06:24.737 rmmod nvme_fabrics 00:06:24.737 rmmod nvme_keyring 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2187538 ']' 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2187538 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2187538 ']' 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2187538 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:24.737 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.738 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2187538 00:06:24.998 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.998 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.998 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2187538' 00:06:24.998 killing process with pid 2187538 00:06:24.998 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2187538 00:06:24.998 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2187538 00:06:25.257 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:25.257 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:25.257 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:25.257 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:25.257 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:25.257 00:10:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.257 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.257 00:10:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.164 00:10:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:27.164 00:06:27.164 real 0m6.273s 00:06:27.164 user 0m8.716s 00:06:27.164 sys 0m2.012s 00:06:27.164 00:10:18 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.164 00:10:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.164 ************************************ 00:06:27.164 END TEST nvmf_referrals 00:06:27.164 ************************************ 00:06:27.164 00:10:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:27.164 00:10:18 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:27.164 00:10:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:27.164 00:10:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.164 00:10:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.164 ************************************ 00:06:27.164 START TEST nvmf_connect_disconnect 00:06:27.164 ************************************ 00:06:27.164 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:27.428 * Looking for test storage... 00:06:27.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.428 00:10:18 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:27.428 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.429 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.429 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.429 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:27.429 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:27.429 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:27.429 00:10:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:29.338 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:29.338 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:29.338 00:10:20 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:29.338 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:29.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:29.338 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:29.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:06:29.339 00:06:29.339 --- 10.0.0.2 ping statistics --- 00:06:29.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.339 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:29.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:06:29.339 00:06:29.339 --- 10.0.0.1 ping statistics --- 00:06:29.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.339 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2189828 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2189828 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2189828 ']' 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.339 00:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.599 [2024-07-16 00:10:20.987525] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
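Note on the topology above: nvmf_tcp_init splits the two ports of the e810 NIC so the test traffic really crosses the wire. The target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and nvmf_tgt is launched under `ip netns exec` (as shown above), while the initiator keeps cvl_0_1 in the root namespace. Condensed from the commands logged above, the plumbing is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP to port 4420

The two pings above are the sanity check that both directions resolve before any NVMe traffic is attempted.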
00:06:29.599 [2024-07-16 00:10:20.987607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.599 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.599 [2024-07-16 00:10:21.053590] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.599 [2024-07-16 00:10:21.168346] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.599 [2024-07-16 00:10:21.168401] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.599 [2024-07-16 00:10:21.168415] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.599 [2024-07-16 00:10:21.168425] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.599 [2024-07-16 00:10:21.168434] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:29.599 [2024-07-16 00:10:21.168517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.599 [2024-07-16 00:10:21.168582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.599 [2024-07-16 00:10:21.168648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.599 [2024-07-16 00:10:21.168651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.859 [2024-07-16 00:10:21.329744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:29.859 00:10:21 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.859 [2024-07-16 00:10:21.391004] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:29.859 00:10:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:33.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:35.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:38.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:41.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:44.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:44.043 rmmod nvme_tcp 00:06:44.043 rmmod nvme_fabrics 00:06:44.043 rmmod nvme_keyring 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2189828 ']' 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2189828 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 2189828 ']' 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2189828 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2189828 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2189828' 00:06:44.043 killing process with pid 2189828 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2189828 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2189828 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.043 00:10:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.582 00:10:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:46.582 00:06:46.582 real 0m18.907s 00:06:46.582 user 0m56.938s 00:06:46.582 sys 0m3.439s 00:06:46.582 00:10:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.582 00:10:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 ************************************ 00:06:46.582 END TEST nvmf_connect_disconnect 00:06:46.582 ************************************ 00:06:46.582 00:10:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:46.582 00:10:37 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:46.582 00:10:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:46.582 00:10:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.582 00:10:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 ************************************ 00:06:46.582 START TEST nvmf_multitarget 00:06:46.582 ************************************ 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:46.582 * Looking for test storage... 
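The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the substance of the connect_disconnect test: after provisioning the target over RPC (nvmf_create_transport -t tcp -o -u 8192 -c 0, a 64 MB / 512-byte-block Malloc0 bdev, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420), the script attaches and detaches an initiator num_iterations=5 times. A rough initiator-side equivalent with stock nvme-cli — a sketch only, since the script itself also passes the --hostnqn/--hostid pair generated earlier in the log:

    for i in $(seq 1 5); do
        nvme connect    -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the 'disconnected 1 controller(s)' line
    done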
00:06:46.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
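nvmftestinit and prepare_net_devs drive everything that follows: because NET_TYPE=phy, gather_supported_nvmf_pci_devs walks the PCI bus for NICs whose vendor:device IDs appear in the e810/x722/mlx tables filled in below, instead of creating virtual interfaces. For the e810 parts on this node the match is on 8086:1592 / 8086:159b. One way to reproduce that check by hand (illustrative only; the harness does its own bus scan rather than calling lspci):

    lspci -Dnn | grep -Ei '8086:(1592|159b)'
    # 0000:0a:00.0 Ethernet controller ... [8086:159b]
    # 0000:0a:00.1 Ethernet controller ... [8086:159b]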
00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:46.582 00:10:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:48.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:48.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.505 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:48.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
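The pci_net_devs glob above is how a PCI function gets mapped to its kernel netdev: the kernel exposes the association under sysfs, so listing /sys/bus/pci/devices/<bdf>/net/ yields the interface name without any driver-specific tooling. For the first port found above:

    ls /sys/bus/pci/devices/0000:0a:00.0/net
    # cvl_0_0   (matching the 'Found net devices under 0000:0a:00.0' line above)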
00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:48.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:48.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:48.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:06:48.506 00:06:48.506 --- 10.0.0.2 ping statistics --- 00:06:48.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.506 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:06:48.506 00:06:48.506 --- 10.0.0.1 ping statistics --- 00:06:48.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.506 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2193472 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2193472 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2193472 ']' 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.506 00:10:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:48.506 [2024-07-16 00:10:39.892439] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
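nvmfappstart above launches nvmf_tgt inside the namespace (-i 0 sets the shared-memory id, -e 0xFFFF the full tracepoint mask, -m 0xF a four-core mask) and then blocks in waitforlisten until pid 2193472 answers on /var/tmp/spdk.sock. A simplified, hypothetical version of that readiness poll (the real helper in autotest_common.sh also enforces the max_retries=100 budget shown above):

    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # keep polling until the RPC server is accepting requests
    done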
00:06:48.506 [2024-07-16 00:10:39.892534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.506 [2024-07-16 00:10:39.971592] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.506 [2024-07-16 00:10:40.104408] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.506 [2024-07-16 00:10:40.104465] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.506 [2024-07-16 00:10:40.104482] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.506 [2024-07-16 00:10:40.104496] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.506 [2024-07-16 00:10:40.104508] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.506 [2024-07-16 00:10:40.104590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.506 [2024-07-16 00:10:40.104644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.506 [2024-07-16 00:10:40.104669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.506 [2024-07-16 00:10:40.104672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.781 00:10:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:48.782 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:49.040 "nvmf_tgt_1" 00:06:49.040 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:49.040 "nvmf_tgt_2" 00:06:49.040 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:49.040 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:49.298 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:06:49.298 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:49.298 true 00:06:49.298 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:49.557 true 00:06:49.557 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:49.557 00:10:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:49.557 rmmod nvme_tcp 00:06:49.557 rmmod nvme_fabrics 00:06:49.557 rmmod nvme_keyring 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2193472 ']' 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2193472 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2193472 ']' 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2193472 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2193472 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2193472' 00:06:49.557 killing process with pid 2193472 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2193472 00:06:49.557 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2193472 00:06:49.815 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:49.815 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:49.815 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:49.815 00:10:41 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:49.815 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:49.815 00:10:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.815 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.815 00:10:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.350 00:10:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:52.350 00:06:52.350 real 0m5.748s 00:06:52.350 user 0m6.541s 00:06:52.350 sys 0m1.855s 00:06:52.350 00:10:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.350 00:10:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:52.350 ************************************ 00:06:52.350 END TEST nvmf_multitarget 00:06:52.350 ************************************ 00:06:52.350 00:10:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:52.350 00:10:43 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:52.350 00:10:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:52.350 00:10:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.350 00:10:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.350 ************************************ 00:06:52.350 START TEST nvmf_rpc 00:06:52.350 ************************************ 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:52.350 * Looking for test storage... 
00:06:52.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.350 00:10:43 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:06:52.351 00:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
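For orientation, gather_supported_nvmf_pci_devs below matches the host's two Intel E810 ports by vendor:device ID (0x8086 - 0x159b, net devices cvl_0_0 and cvl_0_1), and nvmf_tcp_init then wires them together across a network namespace. A condensed sketch of that setup, reassembled from the xtrace that follows (interface names and addresses exactly as logged; assumes root):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator port stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                              # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> host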
00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:06:54.254 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:54.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:54.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:54.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:54.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:06:54.255 00:06:54.255 --- 10.0.0.2 ping statistics --- 00:06:54.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.255 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:54.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:06:54.255 00:06:54.255 --- 10.0.0.1 ping statistics --- 00:06:54.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.255 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.255 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2195696 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2195696 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2195696 ']' 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.256 00:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.514 [2024-07-16 00:10:45.883639] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:06:54.515 [2024-07-16 00:10:45.883720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.515 [2024-07-16 00:10:45.953911] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.515 [2024-07-16 00:10:46.074412] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.515 [2024-07-16 00:10:46.074477] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:54.515 [2024-07-16 00:10:46.074493] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.515 [2024-07-16 00:10:46.074506] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.515 [2024-07-16 00:10:46.074517] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.515 [2024-07-16 00:10:46.074613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.515 [2024-07-16 00:10:46.074667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.515 [2024-07-16 00:10:46.074728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.515 [2024-07-16 00:10:46.074731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:06:55.448 "tick_rate": 2700000000, 00:06:55.448 "poll_groups": [ 00:06:55.448 { 00:06:55.448 "name": "nvmf_tgt_poll_group_000", 00:06:55.448 "admin_qpairs": 0, 00:06:55.448 "io_qpairs": 0, 00:06:55.448 "current_admin_qpairs": 0, 00:06:55.448 "current_io_qpairs": 0, 00:06:55.448 "pending_bdev_io": 0, 00:06:55.448 "completed_nvme_io": 0, 00:06:55.448 "transports": [] 00:06:55.448 }, 00:06:55.448 { 00:06:55.448 "name": "nvmf_tgt_poll_group_001", 00:06:55.448 "admin_qpairs": 0, 00:06:55.448 "io_qpairs": 0, 00:06:55.448 "current_admin_qpairs": 0, 00:06:55.448 "current_io_qpairs": 0, 00:06:55.448 "pending_bdev_io": 0, 00:06:55.448 "completed_nvme_io": 0, 00:06:55.448 "transports": [] 00:06:55.448 }, 00:06:55.448 { 00:06:55.448 "name": "nvmf_tgt_poll_group_002", 00:06:55.448 "admin_qpairs": 0, 00:06:55.448 "io_qpairs": 0, 00:06:55.448 "current_admin_qpairs": 0, 00:06:55.448 "current_io_qpairs": 0, 00:06:55.448 "pending_bdev_io": 0, 00:06:55.448 "completed_nvme_io": 0, 00:06:55.448 "transports": [] 00:06:55.448 }, 00:06:55.448 { 00:06:55.448 "name": "nvmf_tgt_poll_group_003", 00:06:55.448 "admin_qpairs": 0, 00:06:55.448 "io_qpairs": 0, 00:06:55.448 "current_admin_qpairs": 0, 00:06:55.448 "current_io_qpairs": 0, 00:06:55.448 "pending_bdev_io": 0, 00:06:55.448 "completed_nvme_io": 0, 00:06:55.448 "transports": [] 00:06:55.448 } 00:06:55.448 ] 00:06:55.448 }' 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.448 [2024-07-16 00:10:46.947339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.448 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:06:55.448 "tick_rate": 2700000000, 00:06:55.448 "poll_groups": [ 00:06:55.448 { 00:06:55.448 "name": "nvmf_tgt_poll_group_000", 00:06:55.448 "admin_qpairs": 0, 00:06:55.448 "io_qpairs": 0, 00:06:55.448 "current_admin_qpairs": 0, 00:06:55.448 "current_io_qpairs": 0, 00:06:55.448 "pending_bdev_io": 0, 00:06:55.448 "completed_nvme_io": 0, 00:06:55.448 "transports": [ 00:06:55.448 { 00:06:55.448 "trtype": "TCP" 00:06:55.448 } 00:06:55.448 ] 00:06:55.448 }, 00:06:55.448 { 00:06:55.448 "name": "nvmf_tgt_poll_group_001", 00:06:55.449 "admin_qpairs": 0, 00:06:55.449 "io_qpairs": 0, 00:06:55.449 "current_admin_qpairs": 0, 00:06:55.449 "current_io_qpairs": 0, 00:06:55.449 "pending_bdev_io": 0, 00:06:55.449 "completed_nvme_io": 0, 00:06:55.449 "transports": [ 00:06:55.449 { 00:06:55.449 "trtype": "TCP" 00:06:55.449 } 00:06:55.449 ] 00:06:55.449 }, 00:06:55.449 { 00:06:55.449 "name": "nvmf_tgt_poll_group_002", 00:06:55.449 "admin_qpairs": 0, 00:06:55.449 "io_qpairs": 0, 00:06:55.449 "current_admin_qpairs": 0, 00:06:55.449 "current_io_qpairs": 0, 00:06:55.449 "pending_bdev_io": 0, 00:06:55.449 "completed_nvme_io": 0, 00:06:55.449 "transports": [ 00:06:55.449 { 00:06:55.449 "trtype": "TCP" 00:06:55.449 } 00:06:55.449 ] 00:06:55.449 }, 00:06:55.449 { 00:06:55.449 "name": "nvmf_tgt_poll_group_003", 00:06:55.449 "admin_qpairs": 0, 00:06:55.449 "io_qpairs": 0, 00:06:55.449 "current_admin_qpairs": 0, 00:06:55.449 "current_io_qpairs": 0, 00:06:55.449 "pending_bdev_io": 0, 00:06:55.449 "completed_nvme_io": 0, 00:06:55.449 "transports": [ 00:06:55.449 { 00:06:55.449 "trtype": "TCP" 00:06:55.449 } 00:06:55.449 ] 00:06:55.449 } 00:06:55.449 ] 00:06:55.449 }' 00:06:55.449 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:55.449 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:55.449 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:55.449 00:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
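For orientation, once the qpair counters above are confirmed to be zero, rpc.sh builds a single subsystem and walks the host-authorization matrix. A condensed sketch reassembled from the xtrace that follows (NQNs, serial, and listener address exactly as logged; rpc_cmd is the test framework's RPC wrapper, $NVME_HOSTNQN is the gen-hostnqn value sourced earlier, and the repeated nvme connect flags are abbreviated to "..."):

  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # switch to allow-list mode
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect ... -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420    # rejected: host not on the allow list
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $NVME_HOSTNQN
  nvme connect ...                                                      # accepted, then nvme disconnect
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 $NVME_HOSTNQN
  nvme connect ...                                                      # rejected again
  rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  nvme connect ...                                                      # accepted; five create/connect/delete loops follow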
00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.449 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.707 Malloc1 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.707 [2024-07-16 00:10:47.100935] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:55.707 [2024-07-16 00:10:47.123433] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:55.707 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:55.707 could not add new controller: failed to write to nvme-fabrics device 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.707 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:56.273 00:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:56.273 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:56.273 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:56.273 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:56.273 00:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:58.800 00:10:49 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:58.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:58.800 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:58.801 00:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.801 [2024-07-16 00:10:49.984017] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:58.801 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:58.801 could not add new controller: failed to write to nvme-fabrics device 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.801 00:10:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.369 00:10:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:06:59.369 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:59.369 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.369 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:59.369 00:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:01.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:01.273 00:10:52 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.273 [2024-07-16 00:10:52.822419] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.273 00:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:01.847 00:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:01.847 00:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:01.847 00:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:01.847 00:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:01.847 00:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:04.372 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:04.372 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:04.372 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:04.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.373 [2024-07-16 00:10:55.548717] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.373 00:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.631 00:10:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:04.631 00:10:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:04.631 00:10:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:04.631 00:10:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:04.631 00:10:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.204 [2024-07-16 00:10:58.384900] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.204 00:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.771 00:10:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.771 00:10:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:07.771 00:10:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.771 00:10:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:07.771 00:10:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:09.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.671 [2024-07-16 00:11:01.206180] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.671 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.236 00:11:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.236 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:10.236 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.236 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:10.236 00:11:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:12.761 
00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:12.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.761 [2024-07-16 00:11:03.976978] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.761 00:11:03 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.761 00:11:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.325 00:11:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.325 00:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:13.325 00:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.325 00:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:13.325 00:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:15.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.227 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.228 [2024-07-16 00:11:06.799380] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.228 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 [2024-07-16 00:11:06.847410] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 [2024-07-16 00:11:06.895567] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 [2024-07-16 00:11:06.943728] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
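For readers following the trace: each pass of the seq 1 5 loop above repeats the same short-lived subsystem lifecycle. Below is a condensed sketch of that per-iteration sequence, reconstructed only from the rpc_cmd invocations traced here; rpc_cmd is the harness's wrapper around scripts/rpc.py, and $loops is the iteration count set earlier in target/rpc.sh, so both names are taken on trust from the trace rather than shown being defined.

for i in $(seq 1 $loops); do
    # Reconstructed from target/rpc.sh@100-107 as traced above; every
    # subcommand and flag below appears verbatim in this log.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

Unlike the earlier loop (target/rpc.sh@81-94), no nvme connect is issued in these passes; this loop exercises only the target-side RPC surface before nvmf_get_stats is queried.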
00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 [2024-07-16 00:11:06.991911] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.486 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.486 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.486 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.486 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:15.487 "tick_rate": 2700000000, 00:07:15.487 "poll_groups": [ 00:07:15.487 { 00:07:15.487 "name": "nvmf_tgt_poll_group_000", 00:07:15.487 "admin_qpairs": 2, 00:07:15.487 "io_qpairs": 84, 00:07:15.487 "current_admin_qpairs": 0, 00:07:15.487 "current_io_qpairs": 0, 00:07:15.487 "pending_bdev_io": 0, 00:07:15.487 "completed_nvme_io": 114, 00:07:15.487 "transports": [ 00:07:15.487 { 00:07:15.487 "trtype": "TCP" 00:07:15.487 } 00:07:15.487 ] 00:07:15.487 }, 00:07:15.487 { 00:07:15.487 "name": "nvmf_tgt_poll_group_001", 00:07:15.487 "admin_qpairs": 2, 00:07:15.487 "io_qpairs": 84, 00:07:15.487 "current_admin_qpairs": 0, 00:07:15.487 "current_io_qpairs": 0, 00:07:15.487 "pending_bdev_io": 0, 00:07:15.487 "completed_nvme_io": 285, 00:07:15.487 "transports": [ 00:07:15.487 { 00:07:15.487 "trtype": "TCP" 00:07:15.487 } 00:07:15.487 ] 00:07:15.487 }, 00:07:15.487 { 00:07:15.487 
"name": "nvmf_tgt_poll_group_002", 00:07:15.487 "admin_qpairs": 1, 00:07:15.487 "io_qpairs": 84, 00:07:15.487 "current_admin_qpairs": 0, 00:07:15.487 "current_io_qpairs": 0, 00:07:15.487 "pending_bdev_io": 0, 00:07:15.487 "completed_nvme_io": 202, 00:07:15.487 "transports": [ 00:07:15.487 { 00:07:15.487 "trtype": "TCP" 00:07:15.487 } 00:07:15.487 ] 00:07:15.487 }, 00:07:15.487 { 00:07:15.487 "name": "nvmf_tgt_poll_group_003", 00:07:15.487 "admin_qpairs": 2, 00:07:15.487 "io_qpairs": 84, 00:07:15.487 "current_admin_qpairs": 0, 00:07:15.487 "current_io_qpairs": 0, 00:07:15.487 "pending_bdev_io": 0, 00:07:15.487 "completed_nvme_io": 85, 00:07:15.487 "transports": [ 00:07:15.487 { 00:07:15.487 "trtype": "TCP" 00:07:15.487 } 00:07:15.487 ] 00:07:15.487 } 00:07:15.487 ] 00:07:15.487 }' 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:15.487 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:15.744 rmmod nvme_tcp 00:07:15.744 rmmod nvme_fabrics 00:07:15.744 rmmod nvme_keyring 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2195696 ']' 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2195696 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2195696 ']' 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2195696 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2195696 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2195696' 00:07:15.744 killing process with pid 2195696 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2195696 00:07:15.744 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2195696 00:07:16.003 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.003 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:16.003 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:16.003 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:16.003 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:16.003 00:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.003 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.003 00:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.541 00:11:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:18.541 00:07:18.541 real 0m26.064s 00:07:18.541 user 1m25.005s 00:07:18.541 sys 0m4.179s 00:07:18.541 00:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.541 00:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.541 ************************************ 00:07:18.541 END TEST nvmf_rpc 00:07:18.541 ************************************ 00:07:18.541 00:11:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:18.541 00:11:09 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:18.541 00:11:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:18.541 00:11:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.541 00:11:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.541 ************************************ 00:07:18.541 START TEST nvmf_invalid 00:07:18.541 ************************************ 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:18.541 * Looking for test storage... 
00:07:18.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.541 00:11:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:20.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:20.445 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.445 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:20.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:20.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:20.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:07:20.446 00:07:20.446 --- 10.0.0.2 ping statistics --- 00:07:20.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.446 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:07:20.446 00:07:20.446 --- 10.0.0.1 ping statistics --- 00:07:20.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.446 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2200197 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2200197 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2200197 ']' 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.446 00:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:20.446 [2024-07-16 00:11:11.863706] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:07:20.446 [2024-07-16 00:11:11.863782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.446 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.446 [2024-07-16 00:11:11.933343] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.446 [2024-07-16 00:11:12.057454] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.446 [2024-07-16 00:11:12.057511] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.446 [2024-07-16 00:11:12.057528] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.447 [2024-07-16 00:11:12.057541] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.447 [2024-07-16 00:11:12.057553] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.447 [2024-07-16 00:11:12.057643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.447 [2024-07-16 00:11:12.057701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.705 [2024-07-16 00:11:12.057763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.705 [2024-07-16 00:11:12.057765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.269 00:11:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.269 00:11:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:21.269 00:11:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.269 00:11:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.269 00:11:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:21.269 00:11:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.269 00:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:21.269 00:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30446 00:07:21.527 [2024-07-16 00:11:13.066588] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:21.527 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:21.527 { 00:07:21.527 "nqn": "nqn.2016-06.io.spdk:cnode30446", 00:07:21.527 "tgt_name": "foobar", 00:07:21.527 "method": "nvmf_create_subsystem", 00:07:21.527 "req_id": 1 00:07:21.527 } 00:07:21.527 Got JSON-RPC error response 00:07:21.527 response: 00:07:21.527 { 00:07:21.527 "code": -32603, 00:07:21.527 "message": "Unable to find target foobar" 00:07:21.527 }' 00:07:21.527 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:21.527 { 00:07:21.527 "nqn": "nqn.2016-06.io.spdk:cnode30446", 00:07:21.527 "tgt_name": "foobar", 00:07:21.527 "method": "nvmf_create_subsystem", 00:07:21.527 "req_id": 1 00:07:21.527 } 00:07:21.527 Got JSON-RPC error response 00:07:21.527 response: 00:07:21.527 { 00:07:21.527 "code": -32603, 00:07:21.527 "message": "Unable to find target foobar" 
00:07:21.527 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:21.527 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:21.527 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9654 00:07:21.785 [2024-07-16 00:11:13.339510] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9654: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:21.785 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:21.785 { 00:07:21.785 "nqn": "nqn.2016-06.io.spdk:cnode9654", 00:07:21.785 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:21.785 "method": "nvmf_create_subsystem", 00:07:21.785 "req_id": 1 00:07:21.785 } 00:07:21.785 Got JSON-RPC error response 00:07:21.785 response: 00:07:21.785 { 00:07:21.785 "code": -32602, 00:07:21.785 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:21.785 }' 00:07:21.785 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:21.785 { 00:07:21.785 "nqn": "nqn.2016-06.io.spdk:cnode9654", 00:07:21.785 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:21.785 "method": "nvmf_create_subsystem", 00:07:21.785 "req_id": 1 00:07:21.785 } 00:07:21.785 Got JSON-RPC error response 00:07:21.785 response: 00:07:21.785 { 00:07:21.785 "code": -32602, 00:07:21.785 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:21.785 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:21.785 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:21.785 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2626 00:07:22.041 [2024-07-16 00:11:13.588293] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2626: invalid model number 'SPDK_Controller' 00:07:22.041 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:22.041 { 00:07:22.041 "nqn": "nqn.2016-06.io.spdk:cnode2626", 00:07:22.041 "model_number": "SPDK_Controller\u001f", 00:07:22.041 "method": "nvmf_create_subsystem", 00:07:22.041 "req_id": 1 00:07:22.041 } 00:07:22.041 Got JSON-RPC error response 00:07:22.041 response: 00:07:22.041 { 00:07:22.041 "code": -32602, 00:07:22.041 "message": "Invalid MN SPDK_Controller\u001f" 00:07:22.041 }' 00:07:22.041 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:22.041 { 00:07:22.041 "nqn": "nqn.2016-06.io.spdk:cnode2626", 00:07:22.041 "model_number": "SPDK_Controller\u001f", 00:07:22.041 "method": "nvmf_create_subsystem", 00:07:22.041 "req_id": 1 00:07:22.041 } 00:07:22.041 Got JSON-RPC error response 00:07:22.041 response: 00:07:22.041 { 00:07:22.041 "code": -32602, 00:07:22.041 "message": "Invalid MN SPDK_Controller\u001f" 00:07:22.041 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:22.041 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:22.041 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:22.041 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' 
'55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:07:22.041 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:07:22.041 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
[target/invalid.sh@24-25: per-character xtrace omitted; the loop assembles the 21-character string echoed below, one printf %x / echo -e / string+= step per character]
00:07:22.297 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]]
00:07:22.297 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'W,O'\''rJk2WIJ`>JJ9dEph"'
00:07:22.297 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'W,O'\''rJk2WIJ`>JJ9dEph"' nqn.2016-06.io.spdk:cnode67
00:07:22.297 [2024-07-16 00:11:13.909327] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode67: invalid serial number 'W,O'rJk2WIJ`>JJ9dEph"'
00:07:22.555 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:07:22.555 {
00:07:22.555 "nqn": "nqn.2016-06.io.spdk:cnode67",
00:07:22.555 "serial_number": "W,O'\''rJk2WIJ`>JJ9dEph\"",
00:07:22.555 "method": "nvmf_create_subsystem",
00:07:22.555 "req_id": 1
00:07:22.555 }
00:07:22.555 Got JSON-RPC error response
00:07:22.555 response:
00:07:22.555 {
00:07:22.555 "code": -32602,
00:07:22.555 "message": "Invalid SN W,O'\''rJk2WIJ`>JJ9dEph\""
00:07:22.555 }'
00:07:22.555 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:07:22.555 {
00:07:22.555 "nqn": "nqn.2016-06.io.spdk:cnode67",
00:07:22.555 "serial_number": "W,O'rJk2WIJ`>JJ9dEph\"",
00:07:22.555 "method": "nvmf_create_subsystem",
00:07:22.555 "req_id": 1
00:07:22.555 }
00:07:22.555 Got JSON-RPC error response
00:07:22.555 response:
00:07:22.555 {
00:07:22.555 "code": -32602,
00:07:22.555 "message": "Invalid SN W,O'rJk2WIJ`>JJ9dEph\""
00:07:22.555 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:07:22.555 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:07:22.555 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
[target/invalid.sh@21: chars array identical to the one above]
00:07:22.555 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:07:22.555 00:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
[target/invalid.sh@24-25: per-character xtrace omitted; the loop assembles the 41-character string echoed below]
00:07:22.556 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]]
00:07:22.556 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ';wU>w;FzCjYBOcYQ*z.MaL/|#4Z^3LmC&!^X*Rx}'
00:07:22.556 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ';wU>w;FzCjYBOcYQ*z.MaL/|#4Z^3LmC&!^X*Rx}' nqn.2016-06.io.spdk:cnode23943
00:07:22.814 [2024-07-16 00:11:14.318665] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23943: invalid model number ';wU>w;FzCjYBOcYQ*z.MaL/|#4Z^3LmC&!^X*Rx}'
00:07:22.814 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:07:22.814 {
00:07:22.814 "nqn":
"nqn.2016-06.io.spdk:cnode23943", 00:07:22.814 "model_number": ";wU>w;FzCjYBOcYQ*z.M\u007faL/|#4Z^3LmC&!^X*Rx}", 00:07:22.814 "method": "nvmf_create_subsystem", 00:07:22.814 "req_id": 1 00:07:22.814 } 00:07:22.814 Got JSON-RPC error response 00:07:22.814 response: 00:07:22.814 { 00:07:22.814 "code": -32602, 00:07:22.814 "message": "Invalid MN ;wU>w;FzCjYBOcYQ*z.M\u007faL/|#4Z^3LmC&!^X*Rx}" 00:07:22.814 }' 00:07:22.814 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:22.814 { 00:07:22.814 "nqn": "nqn.2016-06.io.spdk:cnode23943", 00:07:22.814 "model_number": ";wU>w;FzCjYBOcYQ*z.M\u007faL/|#4Z^3LmC&!^X*Rx}", 00:07:22.814 "method": "nvmf_create_subsystem", 00:07:22.814 "req_id": 1 00:07:22.814 } 00:07:22.814 Got JSON-RPC error response 00:07:22.814 response: 00:07:22.814 { 00:07:22.814 "code": -32602, 00:07:22.814 "message": "Invalid MN ;wU>w;FzCjYBOcYQ*z.M\u007faL/|#4Z^3LmC&!^X*Rx}" 00:07:22.814 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:22.814 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:23.072 [2024-07-16 00:11:14.563612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.072 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:23.372 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:23.372 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:23.372 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:23.372 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:23.372 00:11:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:23.630 [2024-07-16 00:11:15.073287] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:23.630 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:23.630 { 00:07:23.630 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:23.630 "listen_address": { 00:07:23.630 "trtype": "tcp", 00:07:23.630 "traddr": "", 00:07:23.630 "trsvcid": "4421" 00:07:23.630 }, 00:07:23.630 "method": "nvmf_subsystem_remove_listener", 00:07:23.630 "req_id": 1 00:07:23.630 } 00:07:23.630 Got JSON-RPC error response 00:07:23.630 response: 00:07:23.630 { 00:07:23.630 "code": -32602, 00:07:23.630 "message": "Invalid parameters" 00:07:23.630 }' 00:07:23.630 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:23.630 { 00:07:23.630 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:23.630 "listen_address": { 00:07:23.630 "trtype": "tcp", 00:07:23.630 "traddr": "", 00:07:23.630 "trsvcid": "4421" 00:07:23.630 }, 00:07:23.630 "method": "nvmf_subsystem_remove_listener", 00:07:23.630 "req_id": 1 00:07:23.630 } 00:07:23.631 Got JSON-RPC error response 00:07:23.631 response: 00:07:23.631 { 00:07:23.631 "code": -32602, 00:07:23.631 "message": "Invalid parameters" 00:07:23.631 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:23.631 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6530 -i 0 00:07:23.888 [2024-07-16 00:11:15.338123] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6530: invalid cntlid range [0-65519] 00:07:23.888 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:23.888 { 00:07:23.888 "nqn": "nqn.2016-06.io.spdk:cnode6530", 00:07:23.888 "min_cntlid": 0, 00:07:23.889 "method": "nvmf_create_subsystem", 00:07:23.889 "req_id": 1 00:07:23.889 } 00:07:23.889 Got JSON-RPC error response 00:07:23.889 response: 00:07:23.889 { 00:07:23.889 "code": -32602, 00:07:23.889 "message": "Invalid cntlid range [0-65519]" 00:07:23.889 }' 00:07:23.889 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:23.889 { 00:07:23.889 "nqn": "nqn.2016-06.io.spdk:cnode6530", 00:07:23.889 "min_cntlid": 0, 00:07:23.889 "method": "nvmf_create_subsystem", 00:07:23.889 "req_id": 1 00:07:23.889 } 00:07:23.889 Got JSON-RPC error response 00:07:23.889 response: 00:07:23.889 { 00:07:23.889 "code": -32602, 00:07:23.889 "message": "Invalid cntlid range [0-65519]" 00:07:23.889 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:23.889 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16251 -i 65520 00:07:24.146 [2024-07-16 00:11:15.590976] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16251: invalid cntlid range [65520-65519] 00:07:24.146 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:24.146 { 00:07:24.146 "nqn": "nqn.2016-06.io.spdk:cnode16251", 00:07:24.146 "min_cntlid": 65520, 00:07:24.146 "method": "nvmf_create_subsystem", 00:07:24.146 "req_id": 1 00:07:24.146 } 00:07:24.146 Got JSON-RPC error response 00:07:24.146 response: 00:07:24.146 { 00:07:24.146 "code": -32602, 00:07:24.146 "message": "Invalid cntlid range [65520-65519]" 00:07:24.146 }' 00:07:24.146 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:24.146 { 00:07:24.146 "nqn": "nqn.2016-06.io.spdk:cnode16251", 00:07:24.146 "min_cntlid": 65520, 00:07:24.146 "method": "nvmf_create_subsystem", 00:07:24.146 "req_id": 1 00:07:24.146 } 00:07:24.146 Got JSON-RPC error response 00:07:24.146 response: 00:07:24.146 { 00:07:24.146 "code": -32602, 00:07:24.146 "message": "Invalid cntlid range [65520-65519]" 00:07:24.146 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:24.146 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8404 -I 0 00:07:24.405 [2024-07-16 00:11:15.835810] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8404: invalid cntlid range [1-0] 00:07:24.405 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:24.405 { 00:07:24.405 "nqn": "nqn.2016-06.io.spdk:cnode8404", 00:07:24.405 "max_cntlid": 0, 00:07:24.405 "method": "nvmf_create_subsystem", 00:07:24.405 "req_id": 1 00:07:24.405 } 00:07:24.405 Got JSON-RPC error response 00:07:24.405 response: 00:07:24.405 { 00:07:24.405 "code": -32602, 00:07:24.405 "message": "Invalid cntlid range [1-0]" 00:07:24.405 }' 00:07:24.405 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:24.405 { 00:07:24.405 "nqn": "nqn.2016-06.io.spdk:cnode8404", 00:07:24.405 "max_cntlid": 0, 00:07:24.405 "method": "nvmf_create_subsystem", 00:07:24.405 "req_id": 1 00:07:24.405 } 00:07:24.405 Got JSON-RPC error response 00:07:24.405 response: 
00:07:24.405 { 00:07:24.405 "code": -32602, 00:07:24.405 "message": "Invalid cntlid range [1-0]" 00:07:24.405 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:24.405 00:11:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13127 -I 65520 00:07:24.662 [2024-07-16 00:11:16.084670] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13127: invalid cntlid range [1-65520] 00:07:24.662 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:24.662 { 00:07:24.662 "nqn": "nqn.2016-06.io.spdk:cnode13127", 00:07:24.662 "max_cntlid": 65520, 00:07:24.662 "method": "nvmf_create_subsystem", 00:07:24.662 "req_id": 1 00:07:24.662 } 00:07:24.662 Got JSON-RPC error response 00:07:24.662 response: 00:07:24.662 { 00:07:24.662 "code": -32602, 00:07:24.662 "message": "Invalid cntlid range [1-65520]" 00:07:24.662 }' 00:07:24.662 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:24.662 { 00:07:24.662 "nqn": "nqn.2016-06.io.spdk:cnode13127", 00:07:24.662 "max_cntlid": 65520, 00:07:24.662 "method": "nvmf_create_subsystem", 00:07:24.662 "req_id": 1 00:07:24.662 } 00:07:24.662 Got JSON-RPC error response 00:07:24.662 response: 00:07:24.662 { 00:07:24.662 "code": -32602, 00:07:24.662 "message": "Invalid cntlid range [1-65520]" 00:07:24.662 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:24.662 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7611 -i 6 -I 5 00:07:24.919 [2024-07-16 00:11:16.329503] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7611: invalid cntlid range [6-5] 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:24.919 { 00:07:24.919 "nqn": "nqn.2016-06.io.spdk:cnode7611", 00:07:24.919 "min_cntlid": 6, 00:07:24.919 "max_cntlid": 5, 00:07:24.919 "method": "nvmf_create_subsystem", 00:07:24.919 "req_id": 1 00:07:24.919 } 00:07:24.919 Got JSON-RPC error response 00:07:24.919 response: 00:07:24.919 { 00:07:24.919 "code": -32602, 00:07:24.919 "message": "Invalid cntlid range [6-5]" 00:07:24.919 }' 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:24.919 { 00:07:24.919 "nqn": "nqn.2016-06.io.spdk:cnode7611", 00:07:24.919 "min_cntlid": 6, 00:07:24.919 "max_cntlid": 5, 00:07:24.919 "method": "nvmf_create_subsystem", 00:07:24.919 "req_id": 1 00:07:24.919 } 00:07:24.919 Got JSON-RPC error response 00:07:24.919 response: 00:07:24.919 { 00:07:24.919 "code": -32602, 00:07:24.919 "message": "Invalid cntlid range [6-5]" 00:07:24.919 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:24.919 { 00:07:24.919 "name": "foobar", 00:07:24.919 "method": "nvmf_delete_target", 00:07:24.919 "req_id": 1 00:07:24.919 } 00:07:24.919 Got JSON-RPC error response 00:07:24.919 response: 00:07:24.919 { 00:07:24.919 "code": -32602, 00:07:24.919 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:07:24.919 }' 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:24.919 { 00:07:24.919 "name": "foobar", 00:07:24.919 "method": "nvmf_delete_target", 00:07:24.919 "req_id": 1 00:07:24.919 } 00:07:24.919 Got JSON-RPC error response 00:07:24.919 response: 00:07:24.919 { 00:07:24.919 "code": -32602, 00:07:24.919 "message": "The specified target doesn't exist, cannot delete it." 00:07:24.919 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:24.919 rmmod nvme_tcp 00:07:24.919 rmmod nvme_fabrics 00:07:24.919 rmmod nvme_keyring 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2200197 ']' 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2200197 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2200197 ']' 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2200197 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.919 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2200197 00:07:25.176 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.176 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.176 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2200197' 00:07:25.176 killing process with pid 2200197 00:07:25.176 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2200197 00:07:25.176 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2200197 00:07:25.434 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:25.434 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:25.434 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:25.434 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.434 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:25.434 00:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.434 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
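Every case in the suite that just finished exercises one capture-and-match pattern: issue an rpc.py call with a deliberately invalid argument, capture the JSON-RPC error text, and glob-match the expected message before tearing the target down. Reduced to its core (the nqn and expected message are taken from the first case in this run; capturing stderr with 2>&1 is an assumption about how out= gets filled, not the verbatim invalid.sh code):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30446 2>&1) || true
  # the target must reject the unknown target name with the expected JSON-RPC error
  [[ $out == *"Unable to find target foobar"* ]] || exit 1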
00:07:25.434 00:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.337 00:11:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:27.337 00:07:27.337 real 0m9.265s 00:07:27.337 user 0m22.768s 00:07:27.337 sys 0m2.420s 00:07:27.337 00:11:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.337 00:11:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:27.337 ************************************ 00:07:27.337 END TEST nvmf_invalid 00:07:27.337 ************************************ 00:07:27.337 00:11:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:27.337 00:11:18 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.337 00:11:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.337 00:11:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.337 00:11:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.337 ************************************ 00:07:27.337 START TEST nvmf_abort 00:07:27.337 ************************************ 00:07:27.337 00:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.595 * Looking for test storage... 00:07:27.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.595 00:11:18 nvmf_tcp.nvmf_abort -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@5: three near-identical PATH assignments omitted; each prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin before PATH is exported, yielding the value echoed below]
00:11:18 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.596
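The nvmftestinit that follows performs physical-NIC discovery: walk the PCI devices, match vendor/device pairs against the e810/x722/mlx ID lists, and collect the kernel net devices under each matching function. In outline (0x8086/0x159b are the E810 parts found on this rig; the loop is a simplified sketch of gather_supported_nvmf_pci_devs, and the cvl_* names are this run's renamed interfaces):

  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      # keep only E810-class functions (vendor 0x8086, device 0x159b)
      [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == 0x159b ]] || continue
      echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
      ls "$pci/net"   # net devices under this function, e.g. cvl_0_0
  done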
00:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:27.596 00:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:29.494 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.494 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:29.495 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:29.495 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:29.495 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:29.495 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:07:29.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:07:29.495 00:07:29.495 --- 10.0.0.2 ping statistics --- 00:07:29.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.495 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:07:29.495 00:07:29.495 --- 10.0.0.1 ping statistics --- 00:07:29.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.495 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.495 00:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2202954 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2202954 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2202954 ']' 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.495 00:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.495 [2024-07-16 00:11:21.067512] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
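The nvmf_tcp_init block traced above is what gives the phy run its two endpoints: the first E810 port (cvl_0_0, 0000:0a:00.0) is moved into a dedicated network namespace and becomes the target at 10.0.0.2, while the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic has to cross the link between the two ports instead of being short-circuited inside a single kernel stack. Condensed from the trace (interface names are this rig's ice devices):

    ip netns add cvl_0_0_ns_spdk                        # fresh namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

Only after both pings succeed does nvmfappstart launch nvmf_tgt inside the namespace (hence the ip netns exec prefix on the build/bin/nvmf_tgt command above) and wait for its RPC socket at /var/tmp/spdk.sock.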
00:07:29.495 [2024-07-16 00:11:21.067591] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.495 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.753 [2024-07-16 00:11:21.138993] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.753 [2024-07-16 00:11:21.261581] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.753 [2024-07-16 00:11:21.261643] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.753 [2024-07-16 00:11:21.261660] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.753 [2024-07-16 00:11:21.261674] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.753 [2024-07-16 00:11:21.261685] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.753 [2024-07-16 00:11:21.261783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.753 [2024-07-16 00:11:21.261839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.753 [2024-07-16 00:11:21.261842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.686 [2024-07-16 00:11:22.078954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.686 Malloc0 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.686 Delay0 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
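With the target up, abort.sh wires its storage stack over JSON-RPC; the rpc_cmd calls in these records are the test harness's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. The equivalent standalone sequence, collected from this record and the ones just below (a sketch of what the trace performs, not a new procedure):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256      # TCP transport, options as traced
    $rpc bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB RAM disk, 4 KiB blocks
    # Wrap the RAM disk so every I/O sees roughly one second of injected latency
    # (-r/-t/-w/-n are average/p99 read and write latencies in microseconds);
    # slow I/O is what leaves commands in flight for the abort example to abort.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420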
00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:30.686 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.687 [2024-07-16 00:11:22.155890] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.687 00:11:22 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:30.687 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.687 [2024-07-16 00:11:22.221436] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:33.219 Initializing NVMe Controllers 00:07:33.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:33.220 controller IO queue size 128 less than required 00:07:33.220 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:33.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:33.220 Initialization complete. Launching workers. 
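The workload that produces the abort statistics below is the stock abort example, invoked as in the trace:

    # -c 0x1: one core; -t 1: run for one second; -q 128: queue depth; -l warning: log level
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

Against the one-second Delay0 namespace almost every read is still in flight when its abort arrives, which is the point of the test. In the summary that follows, the per-namespace line counts reads that completed normally against reads that failed, which here is overwhelmingly aborted commands; the controller line counts the abort commands themselves: "success" aborts actually cancelled a command, the small "unsuccess" figure is presumably aborts that raced with a completion, and the handful that "failed to submit" presumably could not be queued at that moment. The earlier "IO queue size 128 less than required" warning just notes that requests beyond the negotiated queue size are queued inside the driver, which is harmless for this run.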
00:07:33.220 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33385 00:07:33.220 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33446, failed to submit 62 00:07:33.220 success 33389, unsuccess 57, failed 0 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.220 rmmod nvme_tcp 00:07:33.220 rmmod nvme_fabrics 00:07:33.220 rmmod nvme_keyring 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2202954 ']' 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2202954 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2202954 ']' 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2202954 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2202954 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2202954' 00:07:33.220 killing process with pid 2202954 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2202954 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2202954 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.220 00:11:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.160 00:11:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.160 00:07:35.160 real 0m7.785s 00:07:35.160 user 0m12.163s 00:07:35.160 sys 0m2.568s 00:07:35.160 00:11:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.160 00:11:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.160 ************************************ 00:07:35.160 END TEST nvmf_abort 00:07:35.160 ************************************ 00:07:35.160 00:11:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:35.160 00:11:26 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.160 00:11:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.160 00:11:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.160 00:11:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.160 ************************************ 00:07:35.160 START TEST nvmf_ns_hotplug_stress 00:07:35.160 ************************************ 00:07:35.160 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.420 * Looking for test storage... 00:07:35.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.420 00:11:26 
[trace condensed: nvmf_tcp.nvmf_ns_hotplug_stress sources nvmf/common.sh exactly as the nvmf_abort run above did. NET_TYPE=phy and NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn are set, scripts/common.sh pulls in /etc/opt/spdk-pkgdep/paths/export.sh with the same duplicated PATH prepends shown earlier, NVMF_APP_SHM_ID is exported and build_nvmf_app_args runs]
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.420 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.421 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.421 00:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.324 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.324 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.324 00:11:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.324 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.324 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.324 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.325 00:11:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:37.325 00:07:37.325 --- 10.0.0.2 ping statistics --- 00:07:37.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.325 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:07:37.325 00:07:37.325 --- 10.0.0.1 ping statistics --- 00:07:37.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.325 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2205310 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2205310 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2205310 ']' 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.325 00:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.584 [2024-07-16 00:11:28.966497] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:07:37.584 [2024-07-16 00:11:28.966579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.584 [2024-07-16 00:11:29.032021] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.584 [2024-07-16 00:11:29.152047] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.584 [2024-07-16 00:11:29.152111] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.584 [2024-07-16 00:11:29.152128] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.584 [2024-07-16 00:11:29.152141] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.584 [2024-07-16 00:11:29.152152] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.584 [2024-07-16 00:11:29.152238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.584 [2024-07-16 00:11:29.152293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.584 [2024-07-16 00:11:29.152297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.518 00:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.518 00:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:38.518 00:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.518 00:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.518 00:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.518 00:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.518 00:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:38.518 00:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:38.775 [2024-07-16 00:11:30.244649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.775 00:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:39.031 00:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.323 [2024-07-16 00:11:30.739489] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.323 00:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.581 00:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:07:39.839 Malloc0 00:07:39.839 00:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:40.096 Delay0 00:07:40.096 00:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.353 00:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:40.613 NULL1 00:07:40.613 00:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:40.870 00:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2205742 00:07:40.870 00:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:40.870 00:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:40.870 00:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.240 Read completed with error (sct=0, sc=11) 00:07:42.240 00:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.240 00:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:42.240 00:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:42.498 true 00:07:42.498 00:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:42.498 00:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.430 00:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.687 00:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:43.687 00:11:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:43.944 true 00:07:43.944 00:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:43.944 00:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.201 00:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.458 00:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:44.458 00:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:44.458 true 00:07:44.458 00:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:44.458 00:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.716 00:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.973 00:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:44.973 00:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:45.230 true 00:07:45.230 00:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:45.230 00:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.601 00:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.601 00:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:46.601 00:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:46.859 true 00:07:46.859 00:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:46.859 00:11:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.790 00:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.048 00:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:48.048 00:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:48.304 true 00:07:48.304 00:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:48.304 00:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.563 00:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.821 00:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:48.821 00:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:49.079 true 00:07:49.079 00:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:49.079 00:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.644 00:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.902 00:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:49.902 00:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:50.158 true 00:07:50.158 00:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:50.158 00:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.416 00:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.673 00:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:50.673 00:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:50.930 true 00:07:50.930 00:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:50.930 00:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:51.869 00:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.125 00:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:52.125 00:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:52.381 true 00:07:52.381 00:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:52.381 00:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.638 00:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.896 00:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:52.896 00:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:53.189 true 00:07:53.189 00:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:53.189 00:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.446 00:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.703 00:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:53.703 00:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:53.960 true 00:07:53.960 00:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:53.960 00:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.892 00:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.150 00:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:55.150 00:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:55.408 true 00:07:55.408 00:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:55.408 00:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.665 00:11:47 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.923 00:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:55.923 00:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:55.923 true 00:07:56.180 00:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:56.180 00:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.114 00:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.114 00:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:57.114 00:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:57.371 true 00:07:57.371 00:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:57.371 00:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.628 00:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.886 00:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:57.886 00:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:58.143 true 00:07:58.143 00:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:58.143 00:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.418 00:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.676 00:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:58.676 00:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:58.933 true 00:07:58.933 00:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:07:58.933 00:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.306 00:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
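
Each pass through this stretch of the trace is one iteration of the hot-plug loop that the sh@44-sh@50 markers point at in ns_hotplug_stress.sh: while the background I/O generator (PID 2205742 in this run) is still alive, namespace 1 is detached from cnode1, the Delay0 bdev is attached again, and the NULL1 bdev is grown by one unit. A minimal sketch of that loop, reconstructed from the trace alone (the variable names and the starting size are assumptions, not the script's actual source):

    # Sketch of the sh@44-sh@50 loop as the trace implies it; not the shipped script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=2205742                                 # assumed: PID of the I/O generator in this run
    null_size=1001                                   # assumed start; the trace counts 1002, 1003, ...
    while kill -0 "$perf_pid"; do                    # sh@44: keep looping while the I/O process lives
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the delay bdev
        null_size=$((null_size + 1))                 # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"   # sh@50: resize NULL1; the RPC prints "true"
    done

The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the initiator's reads completing while NSID 1 is detached; sct=0 with sc=11 (0x0b) appears to be the NVMe generic-status "invalid namespace or format" code, which is the expected completion for I/O issued against a namespace that has just been hot-removed.
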
00:08:00.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.306 00:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:00.306 00:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:00.564 true 00:08:00.564 00:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:00.564 00:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.497 00:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.497 00:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:01.497 00:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:01.756 true 00:08:01.756 00:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:01.756 00:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.015 00:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.272 00:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:02.272 00:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:02.530 true 00:08:02.530 00:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:02.530 00:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.462 00:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.720 00:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:03.720 00:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:03.977 true 00:08:03.977 00:11:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:03.977 00:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.236 00:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.493 00:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:04.493 00:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:04.751 true 00:08:04.751 00:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:04.751 00:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.008 00:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.266 00:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:05.266 00:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:05.266 true 00:08:05.266 00:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:05.266 00:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.638 00:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.895 00:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:06.895 00:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:07.184 true 00:08:07.184 00:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:07.184 00:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.442 00:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.699 00:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:07.699 00:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:07.957 true 00:08:07.957 00:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2205742 00:08:07.957 00:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.214 00:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.472 00:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:08.472 00:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:08.472 true 00:08:08.730 00:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:08.730 00:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.664 00:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.922 00:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:09.922 00:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:10.179 true 00:08:10.179 00:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:10.179 00:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.436 00:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.694 00:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:10.694 00:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:10.953 true 00:08:10.953 00:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742 00:08:10.953 00:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.885 00:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.886 Initializing NVMe Controllers 00:08:11.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.886 Controller IO queue size 128, less than required. 00:08:11.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:11.886 Controller IO queue size 128, less than required.
00:08:11.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:11.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:11.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:11.886 Initialization complete. Launching workers.
00:08:11.886 ========================================================
00:08:11.886                                                                                 Latency(us)
00:08:11.886 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:11.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1019.53       0.50   66168.47    2405.29 1084510.03
00:08:11.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10468.12       5.11   12228.00    2692.31  450017.12
00:08:11.886 ========================================================
00:08:11.886 Total                                                                    :   11487.65       5.61   17015.23    2405.29 1084510.03
00:08:12.142 00:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:12.142 00:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:12.399 true
00:08:12.399 00:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2205742
00:08:12.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2205742) - No such process
00:08:12.399 00:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2205742
00:08:12.399 00:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:12.656 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:12.913 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:12.913 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:12.913 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:12.913 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:12.913 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:13.169 null0
00:08:13.169 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:13.169 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.169 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:13.425 null1
00:08:13.425 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:13.425 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.425 00:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2
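
At 00:08:12.399 the sh@44 probe finally fails ("No such process"): the I/O generator, PID 2205742, has exited, its final latency summary appearing just above. The summary reads as one row per attached namespace: NSID 1, the namespace being hot-plugged the whole time, sustained only about 1,020 IOPS at roughly 66 ms average latency, while the undisturbed NSID 2 ran about 10,470 IOPS at roughly 12 ms, which is consistent with the constant detach/attach cycling on namespace 1. The loop then ends and the script reaps the process and tears both namespaces down. A sketch of that exit path as the sh@44 and sh@53-sh@55 markers suggest it (an approximation under the same assumptions as the earlier sketch, not the script's source):

    # Loop exit and teardown as the trace suggests; rpc and perf_pid assumed as before.
    kill -0 "$perf_pid" || true     # sh@44 now prints: kill: (2205742) - No such process
    wait "$perf_pid" || true        # sh@53: collect the generator's exit status
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@54
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2    # sh@55

The eight bdev_null_create calls that follow (null0 through null7, continuing just below) set up one 100 MB, 4096-byte-block null bdev per worker for the parallel phase.
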
00:08:13.682 null2 00:08:13.682 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.682 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.682 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:13.937 null3 00:08:13.937 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.937 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.937 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:14.193 null4 00:08:14.193 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.193 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.193 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:14.449 null5 00:08:14.449 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.449 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.449 00:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:14.760 null6 00:08:14.760 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.760 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.760 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:15.018 null7 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
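
With the eight null bdevs in place, sh@58-sh@64 fork eight add_remove workers, one namespace/bdev pair each, and the xtrace output of those background jobs interleaves from here on. Each worker cycles ten times through add-then-remove on its own namespace ID (the repeating sh@16-sh@18 markers below). A reconstruction under the same caveats as the earlier sketches:

    # Parallel hot-plug phase as implied by the sh@14-sh@18 and sh@58-sh@66 markers.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                        # sh@14
        for ((i = 0; i < 10; i++)); do               # sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
    nthreads=8
    pids=()                                          # sh@58
    for ((i = 0; i < nthreads; i++)); do             # sh@62 (null$i was created at sh@60 above)
        add_remove $((i + 1)) "null$i" &             # sh@63: add_remove 1 null0, 2 null1, ...
        pids+=($!)                                   # sh@64
    done
    wait "${pids[@]}"                                # sh@66: wait 2209918 2209919 ... 2209931

The jumbled ordering of the add/remove lines that follows is simply these eight subshells racing one another; each worker still alternates strictly add, then remove, on its own namespace ID.
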
00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.018 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2209918 2209919 2209921 2209923 2209925 2209927 2209929 2209931 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.019 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.276 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.276 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.276 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.276 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.276 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.276 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.276 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.276 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.532 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.532 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.533 00:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:15.790 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.790 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.790 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.790 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.790 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.790 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.790 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.790 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.047 
00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.047 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.306 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.306 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.306 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.306 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.306 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.306 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.306 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.306 00:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.564 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.822 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.822 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:16.822 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.822 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.822 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.822 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.822 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.822 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.080 
00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.080 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.338 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.338 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.338 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.338 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.338 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.338 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.338 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.338 00:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.596 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.854 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.855 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.855 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.855 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:08:17.855 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.855 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.855 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.855 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.112 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.113 
00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.113 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.370 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.370 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.370 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.370 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.370 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.370 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.370 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.628 00:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.628 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.628 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.628 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.886 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.143 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.143 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.143 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.143 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.143 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.143 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:08:19.143 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.143 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.401 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.402 00:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.695 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.695 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.695 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.695 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.695 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.695 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.695 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.695 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.954 00:12:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.954 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.212 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.212 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.212 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.213 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.213 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.213 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.213 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.213 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.470 rmmod nvme_tcp 00:08:20.470 rmmod nvme_fabrics 00:08:20.470 rmmod nvme_keyring 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2205310 ']' 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2205310 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2205310 ']' 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2205310 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:20.470 00:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.470 00:12:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2205310 00:08:20.470 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:20.470 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:20.470 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2205310' 00:08:20.470 killing process with pid 2205310 00:08:20.470 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2205310 00:08:20.470 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2205310 00:08:20.728 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.728 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.728 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.728 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.728 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.728 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.728 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.728 00:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.264 00:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.264 00:08:23.264 real 0m47.590s 00:08:23.264 user 3m35.950s 00:08:23.264 sys 0m16.359s 00:08:23.264 00:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.264 00:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.264 ************************************ 00:08:23.264 END TEST nvmf_ns_hotplug_stress 00:08:23.264 ************************************ 00:08:23.264 00:12:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:23.264 00:12:14 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:23.264 00:12:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.264 00:12:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.264 00:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.264 ************************************ 00:08:23.264 START TEST nvmf_connect_stress 00:08:23.264 ************************************ 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:23.264 * Looking for test storage... 
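[editor's note] The namespace add/remove churn traced in the preceding pages (and just reported as passing in 0m47.590s real time) is driven by ns_hotplug_stress.sh lines 16-18. A minimal re-creation of that loop, assembled only from what is visible in the trace: the rpc.py path, the ten-iteration bound at @16, the add_ns/remove_ns argument order at @17/@18, and the null0..null7 bdev names are all taken from the log; running eight concurrent workers, one per namespace, is an inference from the interleaved batches of eight adds followed by eight removes (and from the eight trailing ++i/i<10 pairs when the loops wind down), and the add_remove helper name is hypothetical.

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {   # hypothetical helper: churn one namespace ID against cnode1
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do                         # @16 in the trace
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
    done
}

for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &   # eight workers in parallel (assumption);
done                                     # concurrency produces the interleaving above
wait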
00:08:23.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.264 00:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.167 00:12:16 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:08:25.167 00:08:25.167 --- 10.0.0.2 ping statistics --- 00:08:25.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.167 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:08:25.167 00:08:25.167 --- 10.0.0.1 ping statistics --- 00:08:25.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.167 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2213189 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2213189 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2213189 ']' 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.167 00:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.168 00:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.168 00:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.168 [2024-07-16 00:12:16.671970] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
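[editor's note] Before the target app starts, nvmf_tcp_init (nvmf/common.sh@229-268 in the trace above) builds the two-port test topology and verifies it with the two pings whose output appears above: cvl_0_0 is moved into a network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. The same steps as a standalone sequence; every command, interface name, address, and port below is copied from the traced lines, with only the harness's error handling omitted.

#!/usr/bin/env bash
set -e
ns=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                   # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator NIC stays in the root ns
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                # root ns -> target, as logged above
ip netns exec "$ns" ping -c 1 10.0.0.1            # target ns -> initiator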
00:08:25.168 [2024-07-16 00:12:16.672045] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.168 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.168 [2024-07-16 00:12:16.735709] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.426 [2024-07-16 00:12:16.846013] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.426 [2024-07-16 00:12:16.846071] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.426 [2024-07-16 00:12:16.846087] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.426 [2024-07-16 00:12:16.846100] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.426 [2024-07-16 00:12:16.846111] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.426 [2024-07-16 00:12:16.846197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.426 [2024-07-16 00:12:16.846314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.426 [2024-07-16 00:12:16.846318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.991 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.991 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:25.991 00:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.991 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.991 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.249 [2024-07-16 00:12:17.630281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.249 [2024-07-16 00:12:17.664001] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.249 NULL1 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2213345 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.249 00:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.507 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.507 00:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:26.507 00:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.507 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.507 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.764 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.764 00:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:26.764 00:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.764 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.764 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.329 00:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 
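[editor's note] From here to the end of the excerpt the log alternates "kill -0 2213345" (connect_stress.sh@34) with an rpc_cmd batch (@35), one pair every few hundred milliseconds. A plausible reading of that pattern, sketched under two assumptions: that the twenty "cat" invocations at @27/@28 append RPC lines into the rpc.txt named at @23 (the heredoc bodies are redirected into the file, so they never appear in the trace), and that rpc_cmd replays that file through scripts/rpc.py. The connect_stress arguments and the PID come verbatim from the log.

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpcs=$spdk/test/nvmf/target/rpc.txt               # built by the cat loop (assumed)
"$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!                                       # 2213345 in this run
while kill -0 "$PERF_PID" 2>/dev/null; do         # @34: stress client still alive?
    "$spdk/scripts/rpc.py" < "$rpcs"              # @35: replay the queued RPC batch
done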
00:08:27.329 00:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.329 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.329 00:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.587 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.587 00:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:27.587 00:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.587 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.587 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.845 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.845 00:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:27.845 00:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.845 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.845 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.102 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.102 00:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:28.102 00:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.102 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.102 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.360 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.360 00:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:28.360 00:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.360 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.360 00:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.925 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.925 00:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:28.925 00:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.925 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.925 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.183 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.183 00:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:29.183 00:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.183 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.183 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.440 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.440 00:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:29.440 00:12:20 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.440 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.440 00:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.698 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.698 00:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:29.698 00:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.698 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.698 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.262 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.262 00:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:30.262 00:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.262 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.262 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.520 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.520 00:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:30.520 00:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.520 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.520 00:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.777 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.777 00:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:30.777 00:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.777 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.777 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.035 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.035 00:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:31.035 00:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.035 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.035 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.293 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.293 00:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:31.293 00:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.293 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.293 00:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.857 00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.857 00:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:31.857 00:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.857 
00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.857 00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.114 00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.114 00:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:32.114 00:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.114 00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.114 00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.371 00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.371 00:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:32.371 00:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.371 00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.371 00:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.628 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.628 00:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:32.628 00:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.628 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.628 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.886 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.886 00:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:32.886 00:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.886 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.886 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.448 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.449 00:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:33.449 00:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.449 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.449 00:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.706 00:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.706 00:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:33.706 00:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.706 00:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.706 00:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.991 00:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.991 00:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:33.991 00:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.991 00:12:25 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.991 00:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.271 00:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.271 00:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:34.271 00:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.271 00:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.271 00:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.528 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.528 00:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:34.528 00:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.528 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.528 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.784 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.784 00:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:34.784 00:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.784 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.784 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.346 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.346 00:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:35.346 00:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.346 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.346 00:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.602 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.602 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:35.602 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.602 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.602 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.859 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.859 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:35.859 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.859 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.859 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.116 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.116 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:36.116 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.116 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
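Every iteration above is bracketed by `xtrace_disable` / `set +x` chatter: the harness mutes bash tracing inside its helpers so the log is not flooded with their internals, then restores it on return. The real helpers live in autotest_common.sh; the sketch below is a plausible reconstruction of the pattern, not the harness's exact code.

```bash
# Reconstruction of the xtrace mute/restore pattern visible above.
# Assumption: this mirrors the intent of autotest_common.sh's helpers,
# not their exact implementation.
xtrace_disable() {
	# Remember whether -x was active, then switch tracing off.
	[[ $- == *x* ]] && XTRACE_WAS_ON=1 || XTRACE_WAS_ON=0
	set +x
}

xtrace_restore() {
	# Re-enable tracing only if it was on when we disabled it.
	if (( XTRACE_WAS_ON )); then
		set -x
	fi
}

rpc_cmd_quiet() {
	xtrace_disable
	echo "helper body runs untraced here"
	xtrace_restore
}

set -x          # the autotest scripts run with tracing on
rpc_cmd_quiet   # only this call line shows up in the trace, not the body
set +x
```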
00:08:36.116 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:36.631 00:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.631 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2213345 00:08:36.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2213345) - No such process 00:08:36.631 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2213345 00:08:36.631 00:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.631 rmmod nvme_tcp 00:08:36.631 rmmod nvme_fabrics 00:08:36.631 rmmod nvme_keyring 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2213189 ']' 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2213189 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2213189 ']' 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2213189 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213189 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213189' 00:08:36.631 killing process with pid 2213189 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2213189 00:08:36.631 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2213189 00:08:36.889 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.889 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.889 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:08:36.889 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.889 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.889 00:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.889 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.889 00:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.421 00:12:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.421 00:08:39.421 real 0m16.022s 00:08:39.421 user 0m40.358s 00:08:39.421 sys 0m6.065s 00:08:39.421 00:12:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.421 00:12:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.421 ************************************ 00:08:39.421 END TEST nvmf_connect_stress 00:08:39.421 ************************************ 00:08:39.421 00:12:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:39.421 00:12:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:39.421 00:12:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.421 00:12:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.421 00:12:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.421 ************************************ 00:08:39.421 START TEST nvmf_fused_ordering 00:08:39.421 ************************************ 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:39.421 * Looking for test storage... 
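The block above is the standard nvmftestfini teardown closing out nvmf_connect_stress: with -e relaxed it retries `modprobe -v -r nvme-tcp` (which also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines), kills the nvmf_tgt process 2213189 only after `ps` confirms the PID still names the expected reactor process, removes the test namespace, and flushes the initiator-side address. A condensed sketch of those steps, with helper bodies reconstructed rather than copied:

```bash
# Condensed nvmftestfini-style teardown mirroring the steps above.
# Assumptions: run as root; nvmfpid holds the target PID; interface and
# namespace names match this fixture (cvl_0_1, cvl_0_0_ns_spdk).
nvmf_teardown() {
	local nvmfpid=$1
	sync

	# Unloading can race with closing connections, so relax -e and retry;
	# modprobe -r fails harmlessly until the last user is gone.
	set +e
	for i in {1..20}; do
		modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
		sleep 1
	done
	set -e

	# Kill the PID only if it still belongs to the process we launched.
	if [[ -n $nvmfpid ]] && ps --no-headers -o comm= "$nvmfpid" &> /dev/null; then
		kill "$nvmfpid"
		wait "$nvmfpid" || true
	fi

	# Tear out the namespace and flush the initiator-side test address.
	ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true
	ip -4 addr flush cvl_0_1
}
```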
00:08:39.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.421 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.422 00:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:41.326 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:41.326 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:41.326 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.326 00:12:32 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:41.326 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:41.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:08:41.326 00:08:41.326 --- 10.0.0.2 ping statistics --- 00:08:41.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.326 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:41.326 00:08:41.326 --- 10.0.0.1 ping statistics --- 00:08:41.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.326 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.326 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2216496 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2216496 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2216496 ']' 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.327 00:12:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.327 [2024-07-16 00:12:32.756341] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
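nvmftestinit above walks the PCI bus, matches both 0000:0a:00.x ports against the E810 device ID 0x159b (ice driver), and picks up their net devices cvl_0_0 / cvl_0_1. It then builds the fixture: cvl_0_0 moves into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and a ping in each direction proves the wiring before nvmf_tgt is started inside the namespace on core 1 (-m 0x2). The sketch below replays that wiring plus the RPC bring-up that follows in this trace; it assumes root privileges and these interface names, and $rootdir is a hypothetical checkout path:

```bash
# Network fixture and target bring-up, mirroring the trace. Assumptions:
# root privileges, ice ports named cvl_0_0/cvl_0_1, and $rootdir as a
# hypothetical SPDK checkout path.
rootdir=/path/to/spdk
rpc_py=$rootdir/scripts/rpc.py
NS=(ip netns exec cvl_0_0_ns_spdk)

# Target port moves into its own namespace; initiator port stays outside.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
"${NS[@]}" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
"${NS[@]}" ip link set cvl_0_0 up
"${NS[@]}" ip link set lo up

# Open the NVMe/TCP port and prove reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
"${NS[@]}" ping -c 1 10.0.0.1

# Launch the target inside the namespace on core 1 (-m 0x2); the harness
# uses waitforlisten on the RPC socket, approximated here with a sleep.
"${NS[@]}" $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
sleep 2

# RPC bring-up in the same order as the fused_ordering trace below.
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py bdev_null_create NULL1 1000 512
$rpc_py bdev_wait_for_examine
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The fused_ordering binary then connects using the same trid string as connect_stress; each fused_ordering(N) line that follows appears to mark one completed iteration of its fused-command loop.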
00:08:41.327 [2024-07-16 00:12:32.756425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.327 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.327 [2024-07-16 00:12:32.824562] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.586 [2024-07-16 00:12:32.942781] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.586 [2024-07-16 00:12:32.942851] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.586 [2024-07-16 00:12:32.942874] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.586 [2024-07-16 00:12:32.942896] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.586 [2024-07-16 00:12:32.942908] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.586 [2024-07-16 00:12:32.942940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.586 [2024-07-16 00:12:33.092855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.586 [2024-07-16 00:12:33.109071] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.586 00:12:33 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.586 NULL1 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.586 00:12:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:41.586 [2024-07-16 00:12:33.156131] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:08:41.586 [2024-07-16 00:12:33.156172] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216641 ] 00:08:41.586 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.517 Attached to nqn.2016-06.io.spdk:cnode1 00:08:42.517 Namespace ID: 1 size: 1GB 00:08:42.517 fused_ordering(0) 00:08:42.517 fused_ordering(1) 00:08:42.517 fused_ordering(2) 00:08:42.517 fused_ordering(3) 00:08:42.517 fused_ordering(4) 00:08:42.517 fused_ordering(5) 00:08:42.517 fused_ordering(6) 00:08:42.517 fused_ordering(7) 00:08:42.517 fused_ordering(8) 00:08:42.517 fused_ordering(9) 00:08:42.517 fused_ordering(10) 00:08:42.517 fused_ordering(11) 00:08:42.517 fused_ordering(12) 00:08:42.517 fused_ordering(13) 00:08:42.517 fused_ordering(14) 00:08:42.517 fused_ordering(15) 00:08:42.517 fused_ordering(16) 00:08:42.517 fused_ordering(17) 00:08:42.517 fused_ordering(18) 00:08:42.517 fused_ordering(19) 00:08:42.517 fused_ordering(20) 00:08:42.517 fused_ordering(21) 00:08:42.517 fused_ordering(22) 00:08:42.517 fused_ordering(23) 00:08:42.517 fused_ordering(24) 00:08:42.517 fused_ordering(25) 00:08:42.517 fused_ordering(26) 00:08:42.517 fused_ordering(27) 00:08:42.517 fused_ordering(28) 00:08:42.517 fused_ordering(29) 00:08:42.517 fused_ordering(30) 00:08:42.517 fused_ordering(31) 00:08:42.517 fused_ordering(32) 00:08:42.517 fused_ordering(33) 00:08:42.517 fused_ordering(34) 00:08:42.517 fused_ordering(35) 00:08:42.517 fused_ordering(36) 00:08:42.517 fused_ordering(37) 00:08:42.517 fused_ordering(38) 00:08:42.517 fused_ordering(39) 00:08:42.517 fused_ordering(40) 00:08:42.517 fused_ordering(41) 00:08:42.517 fused_ordering(42) 00:08:42.517 fused_ordering(43) 00:08:42.517 
fused_ordering(44) 00:08:42.517 fused_ordering(45) 00:08:42.517 fused_ordering(46) 00:08:42.517 fused_ordering(47) 00:08:42.517 fused_ordering(48) 00:08:42.517 fused_ordering(49) 00:08:42.517 fused_ordering(50) 00:08:42.517 fused_ordering(51) 00:08:42.517 fused_ordering(52) 00:08:42.517 fused_ordering(53) 00:08:42.517 fused_ordering(54) 00:08:42.517 fused_ordering(55) 00:08:42.517 fused_ordering(56) 00:08:42.517 fused_ordering(57) 00:08:42.517 fused_ordering(58) 00:08:42.517 fused_ordering(59) 00:08:42.517 fused_ordering(60) 00:08:42.517 fused_ordering(61) 00:08:42.517 fused_ordering(62) 00:08:42.517 fused_ordering(63) 00:08:42.517 fused_ordering(64) 00:08:42.517 fused_ordering(65) 00:08:42.517 fused_ordering(66) 00:08:42.517 fused_ordering(67) 00:08:42.517 fused_ordering(68) 00:08:42.517 fused_ordering(69) 00:08:42.517 fused_ordering(70) 00:08:42.517 fused_ordering(71) 00:08:42.517 fused_ordering(72) 00:08:42.517 fused_ordering(73) 00:08:42.517 fused_ordering(74) 00:08:42.517 fused_ordering(75) 00:08:42.517 fused_ordering(76) 00:08:42.517 fused_ordering(77) 00:08:42.517 fused_ordering(78) 00:08:42.517 fused_ordering(79) 00:08:42.517 fused_ordering(80) 00:08:42.517 fused_ordering(81) 00:08:42.517 fused_ordering(82) 00:08:42.517 fused_ordering(83) 00:08:42.517 fused_ordering(84) 00:08:42.517 fused_ordering(85) 00:08:42.517 fused_ordering(86) 00:08:42.517 fused_ordering(87) 00:08:42.517 fused_ordering(88) 00:08:42.517 fused_ordering(89) 00:08:42.517 fused_ordering(90) 00:08:42.517 fused_ordering(91) 00:08:42.517 fused_ordering(92) 00:08:42.517 fused_ordering(93) 00:08:42.517 fused_ordering(94) 00:08:42.517 fused_ordering(95) 00:08:42.517 fused_ordering(96) 00:08:42.517 fused_ordering(97) 00:08:42.517 fused_ordering(98) 00:08:42.517 fused_ordering(99) 00:08:42.517 fused_ordering(100) 00:08:42.517 fused_ordering(101) 00:08:42.517 fused_ordering(102) 00:08:42.517 fused_ordering(103) 00:08:42.517 fused_ordering(104) 00:08:42.517 fused_ordering(105) 00:08:42.517 fused_ordering(106) 00:08:42.517 fused_ordering(107) 00:08:42.517 fused_ordering(108) 00:08:42.517 fused_ordering(109) 00:08:42.517 fused_ordering(110) 00:08:42.517 fused_ordering(111) 00:08:42.517 fused_ordering(112) 00:08:42.517 fused_ordering(113) 00:08:42.517 fused_ordering(114) 00:08:42.517 fused_ordering(115) 00:08:42.517 fused_ordering(116) 00:08:42.517 fused_ordering(117) 00:08:42.517 fused_ordering(118) 00:08:42.517 fused_ordering(119) 00:08:42.517 fused_ordering(120) 00:08:42.517 fused_ordering(121) 00:08:42.517 fused_ordering(122) 00:08:42.517 fused_ordering(123) 00:08:42.517 fused_ordering(124) 00:08:42.517 fused_ordering(125) 00:08:42.517 fused_ordering(126) 00:08:42.517 fused_ordering(127) 00:08:42.517 fused_ordering(128) 00:08:42.517 fused_ordering(129) 00:08:42.517 fused_ordering(130) 00:08:42.517 fused_ordering(131) 00:08:42.517 fused_ordering(132) 00:08:42.517 fused_ordering(133) 00:08:42.517 fused_ordering(134) 00:08:42.517 fused_ordering(135) 00:08:42.517 fused_ordering(136) 00:08:42.517 fused_ordering(137) 00:08:42.517 fused_ordering(138) 00:08:42.517 fused_ordering(139) 00:08:42.517 fused_ordering(140) 00:08:42.517 fused_ordering(141) 00:08:42.517 fused_ordering(142) 00:08:42.517 fused_ordering(143) 00:08:42.517 fused_ordering(144) 00:08:42.517 fused_ordering(145) 00:08:42.517 fused_ordering(146) 00:08:42.517 fused_ordering(147) 00:08:42.517 fused_ordering(148) 00:08:42.517 fused_ordering(149) 00:08:42.517 fused_ordering(150) 00:08:42.517 fused_ordering(151) 00:08:42.517 fused_ordering(152) 00:08:42.517 
fused_ordering(153) 00:08:42.517 [fused_ordering(153) through fused_ordering(1012) completed in order between 00:08:42.517 and 00:08:45.147; the per-iteration entries differ only in the counter and timestamp and are elided; the trace resumes at fused_ordering(1013) below]
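Each fused_ordering(N) entry above marks one completed iteration of the test loop, so the sequence should run gaplessly up to 1023. A quick way to confirm that against a saved copy of this console output (build.log is a stand-in name for wherever the log is kept):

  # Count the distinct iterations recorded in the log; 1023 is expected.
  grep -oE 'fused_ordering\([0-9]+\)' build.log | sort -u | wc -l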
00:08:45.147 fused_ordering(1013) 00:08:45.147 fused_ordering(1014) 00:08:45.147 fused_ordering(1015) 00:08:45.147 fused_ordering(1016) 00:08:45.147 fused_ordering(1017) 00:08:45.147 fused_ordering(1018) 00:08:45.147 fused_ordering(1019) 00:08:45.147 fused_ordering(1020) 00:08:45.147 fused_ordering(1021) 00:08:45.147 fused_ordering(1022) 00:08:45.147 fused_ordering(1023) 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.147 rmmod nvme_tcp 00:08:45.147 rmmod nvme_fabrics 00:08:45.147 rmmod nvme_keyring 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2216496 ']' 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2216496 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2216496 ']' 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2216496 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216496 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216496' 00:08:45.147 killing process with pid 2216496 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2216496 00:08:45.147 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2216496 00:08:45.405 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.405 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.405 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.405 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.405 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.405 00:12:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.405 00:12:36 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.405 00:12:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.935 00:12:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.935 00:08:47.935 real 0m8.567s 00:08:47.935 user 0m6.188s 00:08:47.935 sys 0m4.080s 00:08:47.935 00:12:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.935 00:12:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:47.935 ************************************ 00:08:47.935 END TEST nvmf_fused_ordering 00:08:47.935 ************************************ 00:08:47.935 00:12:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:47.935 00:12:39 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:47.935 00:12:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:47.935 00:12:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.935 00:12:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.935 ************************************ 00:08:47.935 START TEST nvmf_delete_subsystem 00:08:47.935 ************************************ 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:47.935 * Looking for test storage... 00:08:47.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.935 00:12:39 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.935 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.936 00:12:39 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.936 00:12:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.833 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.834 00:12:41 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.834 00:12:41 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.834 00:12:41 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:08:49.834 00:08:49.834 --- 10.0.0.2 ping statistics --- 00:08:49.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.834 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:08:49.834 00:08:49.834 --- 10.0.0.1 ping statistics --- 00:08:49.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.834 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2218971 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2218971 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2218971 ']' 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.834 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.834 [2024-07-16 00:12:41.366675] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:08:49.834 [2024-07-16 00:12:41.366756] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.834 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.834 [2024-07-16 00:12:41.429741] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.145 [2024-07-16 00:12:41.539784] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
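For reference, the point-to-point topology that the two ping checks above just validated condenses to the following commands from the nvmf_tcp_init trace (a sketch, with the cvl_0_0/cvl_0_1 names taken from the device discovery above):

  # Target port moves into its own namespace; initiator port stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic arriving on the initiator-facing interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT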
00:08:50.145 [2024-07-16 00:12:41.539843] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.145 [2024-07-16 00:12:41.539872] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.145 [2024-07-16 00:12:41.539891] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.145 [2024-07-16 00:12:41.539901] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.145 [2024-07-16 00:12:41.539971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.145 [2024-07-16 00:12:41.539977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.145 [2024-07-16 00:12:41.690045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.145 [2024-07-16 00:12:41.706291] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.145 NULL1 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.145 Delay0 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2218992 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:50.145 00:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:50.431 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.431 [2024-07-16 00:12:41.780982] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
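The rpc_cmd calls above are thin wrappers around scripts/rpc.py talking to the target on the default /var/tmp/spdk.sock, so the target-side setup condenses to roughly this sequence (a sketch; the four bdev_delay_create latency arguments are, in order, average read, p99 read, average write, and p99 write latency in microseconds):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second delays on the Delay0 namespace keep the spdk_nvme_perf workload (128 outstanding 512-byte I/Os at a 70/30 random read/write mix on cores 2-3 for 5 seconds) pinned in flight, which is what lets the nvmf_delete_subsystem call below land while commands are still queued.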
00:08:52.332 00:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:52.333 00:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:52.333 00:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:52.333 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries elided: queued I/O fails while the subsystem is deleted]
00:08:52.333 [2024-07-16 00:12:43.912282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbd390 is same with the state(5) to be set
00:08:52.333 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:08:52.333 [2024-07-16 00:12:43.913055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe7ec000c00 is same with the state(5) to be set
00:08:52.333 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:08:53.707 [2024-07-16 00:12:44.883702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbea70 is same with the state(5) to be set
00:08:53.707 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:08:53.707 [2024-07-16 00:12:44.911733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe7ec00d6c0 is same with the state(5) to be set
00:08:53.707 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:08:53.707 [2024-07-16 00:12:44.911934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe7ec00d020 is same with the state(5) to be set
00:08:53.707 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:08:53.707 [2024-07-16 00:12:44.913198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbde40 is same with the state(5) to be set
00:08:53.707 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:08:53.707 [2024-07-16 00:12:44.916222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbd7a0 is same with the state(5) to be set
00:08:53.707 Initializing NVMe Controllers
00:08:53.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:53.707 Controller IO queue size 128, less than required.
00:08:53.707 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:53.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:53.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:53.707 Initialization complete. Launching workers.
00:08:53.707 ========================================================
00:08:53.707                                                                            Latency(us)
00:08:53.707 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:53.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     176.21       0.09  881333.99     525.04 1011643.29
00:08:53.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     145.43       0.07  958747.09     330.60 1011412.61
00:08:53.707 ========================================================
00:08:53.707 Total                                                                    :     321.64       0.16  916337.14     330.60 1011643.29
00:08:53.707
00:08:53.707 [2024-07-16 00:12:44.916969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbea70 (9): Bad file descriptor
00:08:53.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:53.707 00:12:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:53.707 00:12:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:53.707 00:12:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2218992
00:08:53.707 00:12:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2218992
00:08:53.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2218992) - No such process
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2218992
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2218992
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2218992
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.965 [2024-07-16 00:12:45.436591] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2219453 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219453 00:08:53.965 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.965 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.965 [2024-07-16 00:12:45.502506] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
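The delay / kill -0 / sleep entries around this point are the script's bounded poll for the perf process; paraphrased as a standalone bash sketch (the exit path is illustrative; the script's own failure handling is not shown in this excerpt):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do    # process still alive?
    (( delay++ > 20 )) && exit 1             # give up after ~10s of 0.5s polls
    sleep 0.5
done                                         # loop exits once deleting the subsystem kills perf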
00:08:54.530 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.530 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219453 00:08:54.530 00:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.096 00:12:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.096 00:12:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219453 00:08:55.096 00:12:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.354 00:12:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.354 00:12:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219453 00:08:55.354 00:12:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.919 00:12:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.919 00:12:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219453 00:08:55.919 00:12:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.484 00:12:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.484 00:12:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219453 00:08:56.484 00:12:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:57.050 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.050 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219453 00:08:57.050 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:57.307 Initializing NVMe Controllers 00:08:57.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:57.307 Controller IO queue size 128, less than required. 00:08:57.307 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:57.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:57.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:57.307 Initialization complete. Launching workers. 
00:08:57.307 ========================================================
00:08:57.307                                                                            Latency(us)
00:08:57.307 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:57.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004656.38 1000247.09 1042897.80
00:08:57.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004321.35 1000284.67 1012115.88
00:08:57.307 ========================================================
00:08:57.307 Total                                                                    :     256.00       0.12 1004488.86 1000247.09 1042897.80
00:08:57.307
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219453
00:08:57.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2219453) - No such process
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2219453
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:57.565 00:12:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:57.565 rmmod nvme_tcp
00:08:57.565 rmmod nvme_fabrics
00:08:57.565 rmmod nvme_keyring
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2218971 ']'
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2218971
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2218971 ']'
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2218971
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218971
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2218971'
00:08:57.566 killing process with pid 2218971
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2218971
00:08:57.566 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2218971
00:08:57.825 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:57.825 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:57.825 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:57.825 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:57.825 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:57.825 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:57.825 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:57.825 00:12:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:00.358 00:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:00.358
00:09:00.358 real	0m12.290s
00:09:00.358 user	0m27.830s
00:09:00.358 sys	0m2.945s
00:09:00.358 00:12:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:00.358 00:12:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:00.358 ************************************
00:09:00.358 END TEST nvmf_delete_subsystem
00:09:00.358 ************************************
00:09:00.358 00:12:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:00.358 00:12:51 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:09:00.358 00:12:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:00.358 00:12:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:00.358 00:12:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:00.358 ************************************
00:09:00.358 START TEST nvmf_ns_masking
00:09:00.358 ************************************
00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:09:00.358 * Looking for test storage...
00:09:00.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.358 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same three toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same three toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:12:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same three toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=31acc102-7c21-4b6f-bd96-755a8b1e6ea9
00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=21d81287-1751-4591-8175-5172d68767e2
00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=49d19568-a86c-4591-8ce3-883f2fd6ec86 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.359 00:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:02.269 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:02.269 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.269 
00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:02.269 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:02.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:09:02.269 00:09:02.269 --- 10.0.0.2 ping statistics --- 00:09:02.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.269 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:09:02.269 00:09:02.269 --- 10.0.0.1 ping statistics --- 00:09:02.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.269 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2221865 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:02.269 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2221865 00:09:02.270 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2221865 ']' 00:09:02.270 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.270 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.270 00:12:53 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.270 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.270 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:02.270 [2024-07-16 00:12:53.631748] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:09:02.270 [2024-07-16 00:12:53.631844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.270 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.270 [2024-07-16 00:12:53.695600] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.270 [2024-07-16 00:12:53.803476] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.270 [2024-07-16 00:12:53.803535] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.270 [2024-07-16 00:12:53.803563] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.270 [2024-07-16 00:12:53.803574] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.270 [2024-07-16 00:12:53.803584] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.270 [2024-07-16 00:12:53.803617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.526 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.526 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:02.526 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.526 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.526 00:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:02.527 00:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.527 00:12:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.783 [2024-07-16 00:12:54.201512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.783 00:12:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:02.783 00:12:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:02.783 00:12:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:03.041 Malloc1 00:09:03.041 00:12:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:03.299 Malloc2 00:09:03.299 00:12:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
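Condensed, the target bring-up traced above amounts to the following sketch (arguments as they appear in the entries; the full rpc.py path is shortened):

rpc.py nvmf_create_transport -t tcp -o -u 8192                                  # TCP transport init
rpc.py bdev_malloc_create 64 512 -b Malloc1                                     # 64 MiB bdev, 512 B blocks
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME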
00:09:03.557 00:12:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:03.815 00:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.073 [2024-07-16 00:12:55.466159] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.073 00:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:04.073 00:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 49d19568-a86c-4591-8ce3-883f2fd6ec86 -a 10.0.0.2 -s 4420 -i 4 00:09:04.073 00:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.073 00:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:04.073 00:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.073 00:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:04.073 00:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:06.609 [ 0]:0x1 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f0eee2567c43198634bd1290b59bbf 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f0eee2567c43198634bd1290b59bbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.609 00:12:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
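The ns_is_visible helper traced above (ns_masking.sh@43-@45) boils down to the following check; paraphrased from the trace:

nvme list-ns /dev/nvme0 | grep 0x1                                # is the nsid listed at all?
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
[[ "$nguid" != "00000000000000000000000000000000" ]]              # an all-zero NGUID means the namespace is masked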
00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:06.609 [ 0]:0x1 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f0eee2567c43198634bd1290b59bbf 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f0eee2567c43198634bd1290b59bbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:06.609 [ 1]:0x2 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56f02b05ac8f496dae1bf5bf4d8dbcfe 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56f02b05ac8f496dae1bf5bf4d8dbcfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.609 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.918 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:07.176 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:07.176 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 49d19568-a86c-4591-8ce3-883f2fd6ec86 -a 10.0.0.2 -s 4420 -i 4 00:09:07.434 00:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:07.434 00:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:07.434 00:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.434 00:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:07.434 00:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:07.434 00:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:09.330 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:09.330 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:09.330 00:13:00 
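The entries that follow exercise the masking RPCs proper: a namespace attached with --no-auto-visible stays hidden from every host until access is granted per host NQN. In outline (arguments as they appear in the trace; rpc.py path shortened):

rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask for host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again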
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.330 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:09.330 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.330 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:09.330 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:09.330 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:09.587 00:13:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.587 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:09.587 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.587 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.588 [ 0]:0x2 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56f02b05ac8f496dae1bf5bf4d8dbcfe 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
56f02b05ac8f496dae1bf5bf4d8dbcfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.588 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:09.845 [ 0]:0x1 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f0eee2567c43198634bd1290b59bbf 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f0eee2567c43198634bd1290b59bbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:09.845 [ 1]:0x2 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56f02b05ac8f496dae1bf5bf4d8dbcfe 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56f02b05ac8f496dae1bf5bf4d8dbcfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.845 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:10.105 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:10.362 [ 0]:0x2 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56f02b05ac8f496dae1bf5bf4d8dbcfe 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56f02b05ac8f496dae1bf5bf4d8dbcfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.362 00:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:10.619 00:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:10.619 00:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 49d19568-a86c-4591-8ce3-883f2fd6ec86 -a 10.0.0.2 -s 4420 -i 4 00:09:10.619 00:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:10.619 00:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:10.619 00:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.619 00:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:10.619 00:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:10.619 00:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
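The 'connect 2' call above expects two block devices to appear once both namespaces are unmasked for host1. The connect/waitforserial pair being traced reduces to roughly the following (nvme connect flags taken verbatim from the log; the polling shape is an approximation of the autotest_common.sh helper, not its exact code):

    # Connect with an explicit hostnqn/hostid so the target can apply
    # per-host masking, then poll until the expected namespace count appears.
    connect() {
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
            -q nqn.2016-06.io.spdk:host1 -I 49d19568-a86c-4591-8ce3-883f2fd6ec86 \
            -a 10.0.0.2 -s 4420 -i 4
        waitforserial SPDKISFASTANDAWESOME "${1:-1}"
    }

    waitforserial() {
        # count block devices carrying the target's serial number
        local serial=$1 expected=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
        done
        return 1
    }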
00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:13.172 [ 0]:0x1 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f0eee2567c43198634bd1290b59bbf 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f0eee2567c43198634bd1290b59bbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.172 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:13.173 [ 1]:0x2 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56f02b05ac8f496dae1bf5bf4d8dbcfe 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56f02b05ac8f496dae1bf5bf4d8dbcfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:13.173 [ 0]:0x2 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56f02b05ac8f496dae1bf5bf4d8dbcfe 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56f02b05ac8f496dae1bf5bf4d8dbcfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:13.173 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:13.432 [2024-07-16 00:13:04.942882] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:09:13.432 request:
00:09:13.432 {
00:09:13.432 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:09:13.432 "nsid": 2,
00:09:13.432 "host": "nqn.2016-06.io.spdk:host1",
00:09:13.432 "method": "nvmf_ns_remove_host",
00:09:13.432 "req_id": 1
00:09:13.432 }
00:09:13.432 Got JSON-RPC error response
00:09:13.432 response:
00:09:13.432 {
00:09:13.432 "code": -32602,
00:09:13.432 "message": "Invalid parameters"
00:09:13.432 }
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:13.432 00:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:09:13.432 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:09:13.690 [ 0]:0x2
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56f02b05ac8f496dae1bf5bf4d8dbcfe
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56f02b05ac8f496dae1bf5bf4d8dbcfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:13.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2223372
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2223372 /var/tmp/host.sock
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2223372 ']'
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:09:13.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:13.690 00:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:09:13.690 [2024-07-16 00:13:05.291011] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
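The Invalid parameters failure above is the expected outcome: namespace 2 was added without --no-auto-visible, so the per-host visibility RPCs refuse to touch it, and the NOT wrapper turns that failure into a pass. rpc.py is only a thin client; what crosses the Unix socket is a plain JSON-RPC 2.0 request. A hand-rolled equivalent of the failing call (untested sketch, assuming the target's default socket path and a netcat built with Unix-socket support):

    # Same call as rpc.py nvmf_ns_remove_host, sent as raw JSON-RPC 2.0;
    # the target answers with the {"code": -32602} body quoted in the log.
    printf '%s\n' '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_ns_remove_host",
      "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 2,
                 "host": "nqn.2016-06.io.spdk:host1"}}' \
        | nc -U /var/tmp/spdk.sock

The spdk_tgt launched above plays the host side of the remaining checks: it gets its own RPC socket via -r /var/tmp/host.sock, which is why the later bdev_nvme_attach_controller and bdev_get_bdevs calls all pass rpc.py -s /var/tmp/host.sock, leaving the target's default /var/tmp/spdk.sock untouched.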
00:09:13.690 [2024-07-16 00:13:05.291094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223372 ] 00:09:13.949 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.949 [2024-07-16 00:13:05.353107] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.949 [2024-07-16 00:13:05.473015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.884 00:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.884 00:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:14.884 00:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.884 00:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.450 00:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 31acc102-7c21-4b6f-bd96-755a8b1e6ea9 00:09:15.450 00:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:15.450 00:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 31ACC1027C214B6FBD96755A8B1E6EA9 -i 00:09:15.450 00:13:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 21d81287-1751-4591-8175-5172d68767e2 00:09:15.450 00:13:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:15.707 00:13:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 21D812871751459181755172D68767E2 -i 00:09:15.707 00:13:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:15.964 00:13:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:16.220 00:13:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:16.220 00:13:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:16.783 nvme0n1 00:09:16.783 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:16.783 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:17.348 nvme1n2 00:09:17.348 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:17.348 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:17.348 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:17.348 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:17.348 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:17.604 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:17.604 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:17.604 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:17.604 00:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:17.604 00:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 31acc102-7c21-4b6f-bd96-755a8b1e6ea9 == \3\1\a\c\c\1\0\2\-\7\c\2\1\-\4\b\6\f\-\b\d\9\6\-\7\5\5\a\8\b\1\e\6\e\a\9 ]] 00:09:17.604 00:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:17.604 00:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:17.604 00:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:17.862 00:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 21d81287-1751-4591-8175-5172d68767e2 == \2\1\d\8\1\2\8\7\-\1\7\5\1\-\4\5\9\1\-\8\1\7\5\-\5\1\7\2\d\6\8\7\6\7\e\2 ]] 00:09:17.862 00:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2223372 00:09:17.862 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2223372 ']' 00:09:17.862 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2223372 00:09:17.862 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:18.120 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.120 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2223372 00:09:18.120 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:18.120 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:18.120 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2223372' 00:09:18.120 killing process with pid 2223372 00:09:18.120 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2223372 00:09:18.120 00:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2223372 00:09:18.378 00:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.943 00:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:18.943 00:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:18.943 00:13:10 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.943 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:18.943 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.943 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.944 rmmod nvme_tcp 00:09:18.944 rmmod nvme_fabrics 00:09:18.944 rmmod nvme_keyring 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2221865 ']' 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2221865 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2221865 ']' 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2221865 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221865 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221865' 00:09:18.944 killing process with pid 2221865 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2221865 00:09:18.944 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2221865 00:09:19.203 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.203 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.203 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.203 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.203 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.203 00:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.203 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.203 00:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.738 00:13:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.738 00:09:21.738 real 0m21.317s 00:09:21.738 user 0m28.507s 00:09:21.738 sys 0m4.096s 00:09:21.738 00:13:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.738 00:13:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:21.738 ************************************ 00:09:21.738 END TEST nvmf_ns_masking 00:09:21.738 ************************************ 00:09:21.738 00:13:12 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:21.738 00:13:12 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:21.738 00:13:12 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:21.738 00:13:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.738 00:13:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.738 00:13:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.738 ************************************ 00:09:21.738 START TEST nvmf_nvme_cli 00:09:21.738 ************************************ 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:21.738 * Looking for test storage... 00:09:21.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.738 00:13:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.739 00:13:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:23.648 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:23.648 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:23.648 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:23.648 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.648 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.649 00:13:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:23.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:23.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms
00:09:23.649
00:09:23.649 --- 10.0.0.2 ping statistics ---
00:09:23.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:23.649 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:23.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:23.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:09:23.649
00:09:23.649 --- 10.0.0.1 ping statistics ---
00:09:23.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:23.649 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2225944
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2225944
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2225944 ']'
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:23.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:23.649 00:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:23.649 [2024-07-16 00:13:15.033038] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
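Both E810 ports sit in the same machine, so nvmf_tcp_init (traced above) moves the target-side port into a private network namespace; without that, the kernel would satisfy 10.0.0.1 -> 10.0.0.2 through its local routing table over loopback and the NVMe/TCP traffic would never touch the physical link. The topology, with the commands collected verbatim from the trace:

    # Target side: cvl_0_0 is moved into the namespace and gets 10.0.0.2;
    # initiator side: cvl_0_1 stays in the root namespace with 10.0.0.1.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

After the two pings verify both directions, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt, as logged above), so its port-4420 listener binds on the isolated side of the link.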
00:09:23.649 [2024-07-16 00:13:15.033127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.649 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.649 [2024-07-16 00:13:15.096123] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.649 [2024-07-16 00:13:15.202308] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.649 [2024-07-16 00:13:15.202362] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.649 [2024-07-16 00:13:15.202390] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.649 [2024-07-16 00:13:15.202402] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.649 [2024-07-16 00:13:15.202411] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.649 [2024-07-16 00:13:15.202478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.649 [2024-07-16 00:13:15.202566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.649 [2024-07-16 00:13:15.202634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.649 [2024-07-16 00:13:15.202631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.907 [2024-07-16 00:13:15.359759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.907 Malloc0 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.907 Malloc1 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.907 00:13:15 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.907 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.908 [2024-07-16 00:13:15.440709] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.908 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:24.165 00:09:24.165 Discovery Log Number of Records 2, Generation counter 2 00:09:24.165 =====Discovery Log Entry 0====== 00:09:24.165 trtype: tcp 00:09:24.165 adrfam: ipv4 00:09:24.165 subtype: current discovery subsystem 00:09:24.165 treq: not required 00:09:24.165 portid: 0 00:09:24.165 trsvcid: 4420 00:09:24.165 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:24.165 traddr: 10.0.0.2 00:09:24.165 eflags: explicit discovery connections, duplicate discovery information 00:09:24.165 sectype: none 00:09:24.165 =====Discovery Log Entry 1====== 00:09:24.165 trtype: tcp 00:09:24.165 adrfam: ipv4 00:09:24.165 subtype: nvme subsystem 00:09:24.165 treq: not required 00:09:24.165 portid: 0 00:09:24.165 trsvcid: 4420 00:09:24.165 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:24.165 traddr: 10.0.0.2 00:09:24.165 eflags: none 00:09:24.165 sectype: none 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:24.165 00:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.752 00:13:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:24.752 00:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.752 00:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.752 00:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:24.752 00:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:24.752 00:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.650 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:26.907 00:13:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:26.907 /dev/nvme0n1 ]] 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.907 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:27.163 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.420 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.420 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:27.420 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:27.420 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.420 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:27.420 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.420 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:27.420 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.421 rmmod nvme_tcp 00:09:27.421 rmmod nvme_fabrics 00:09:27.421 rmmod nvme_keyring 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2225944 ']' 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2225944 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2225944 ']' 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2225944 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225944 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2225944' 00:09:27.421 killing process with pid 2225944 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2225944 00:09:27.421 00:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2225944 00:09:27.679 00:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.679 00:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.679 00:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.679 00:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.679 00:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.679 00:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.679 00:13:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.679 00:13:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.209 00:13:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.209 00:09:30.209 real 0m8.531s 00:09:30.209 user 0m16.579s 00:09:30.209 sys 0m2.110s 00:09:30.209 00:13:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.209 00:13:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.209 ************************************ 00:09:30.209 END TEST nvmf_nvme_cli 00:09:30.209 ************************************ 00:09:30.209 00:13:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:30.209 00:13:21 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:30.209 00:13:21 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:30.210 00:13:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.210 00:13:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.210 00:13:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.210 ************************************ 00:09:30.210 START TEST nvmf_vfio_user 00:09:30.210 ************************************ 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:30.210 * Looking for test storage... 00:09:30.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:30.210 
00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2226804 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2226804' 00:09:30.210 Process pid: 2226804 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2226804 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2226804 ']' 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:30.210 [2024-07-16 00:13:21.497471] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:09:30.210 [2024-07-16 00:13:21.497565] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.210 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.210 [2024-07-16 00:13:21.555210] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.210 [2024-07-16 00:13:21.662243] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.210 [2024-07-16 00:13:21.662296] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.210 [2024-07-16 00:13:21.662325] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.210 [2024-07-16 00:13:21.662336] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.210 [2024-07-16 00:13:21.662346] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
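(Once the target is up, the vfio-user setup that follows, nvmf_vfio_user.sh@64 through @74, reduces to one transport-create plus a per-device loop. A sketch of the equivalent rpc.py calls, assuming NUM_DEVICES=2 as set above:)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
  # each device gets its own vfio-user socket directory
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  $rpc bdev_malloc_create 64 512 -b Malloc$i
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done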
00:09:30.210 [2024-07-16 00:13:21.662416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.210 [2024-07-16 00:13:21.662478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.210 [2024-07-16 00:13:21.662544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.210 [2024-07-16 00:13:21.662546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:30.210 00:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:31.584 00:13:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:31.584 00:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:31.584 00:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:31.584 00:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:31.584 00:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:31.584 00:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:31.842 Malloc1 00:09:31.842 00:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:32.099 00:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:32.356 00:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:32.613 00:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:32.613 00:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:32.613 00:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:32.870 Malloc2 00:09:32.870 00:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:33.127 00:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:33.384 00:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:33.643 00:13:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:33.643 00:13:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:33.643 00:13:25 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:33.643 00:13:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:33.643 00:13:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:33.643 00:13:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:33.643 [2024-07-16 00:13:25.123280] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:09:33.643 [2024-07-16 00:13:25.123326] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227225 ] 00:09:33.643 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.643 [2024-07-16 00:13:25.157280] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:33.643 [2024-07-16 00:13:25.164686] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:33.643 [2024-07-16 00:13:25.164715] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fee6a649000 00:09:33.643 [2024-07-16 00:13:25.167886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.643 [2024-07-16 00:13:25.168690] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.643 [2024-07-16 00:13:25.169696] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.643 [2024-07-16 00:13:25.170697] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.643 [2024-07-16 00:13:25.171704] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.643 [2024-07-16 00:13:25.172710] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.643 [2024-07-16 00:13:25.173720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.643 [2024-07-16 00:13:25.174719] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.643 [2024-07-16 00:13:25.175741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:33.643 [2024-07-16 00:13:25.175762] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fee6a63e000 00:09:33.643 [2024-07-16 00:13:25.176930] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:33.643 [2024-07-16 00:13:25.195566] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:33.643 [2024-07-16 00:13:25.195606] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:33.643 [2024-07-16 00:13:25.197846] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:33.643 [2024-07-16 00:13:25.197926] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:33.643 [2024-07-16 00:13:25.198031] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:33.643 [2024-07-16 00:13:25.198062] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:33.644 [2024-07-16 00:13:25.198073] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:33.644 [2024-07-16 00:13:25.198845] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:33.644 [2024-07-16 00:13:25.198885] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:33.644 [2024-07-16 00:13:25.198900] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:33.644 [2024-07-16 00:13:25.199849] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:33.644 [2024-07-16 00:13:25.199890] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:33.644 [2024-07-16 00:13:25.199905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:33.644 [2024-07-16 00:13:25.200856] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:33.644 [2024-07-16 00:13:25.200896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:33.644 [2024-07-16 00:13:25.201873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:33.644 [2024-07-16 00:13:25.201898] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:33.644 [2024-07-16 00:13:25.201907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:33.644 [2024-07-16 00:13:25.201939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:33.644 [2024-07-16 00:13:25.202049] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:33.644 [2024-07-16 00:13:25.202058] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:33.644 [2024-07-16 00:13:25.202067] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:33.644 [2024-07-16 00:13:25.202886] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:33.644 [2024-07-16 00:13:25.203868] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:33.644 [2024-07-16 00:13:25.204874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:33.644 [2024-07-16 00:13:25.205870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:33.644 [2024-07-16 00:13:25.206011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:33.644 [2024-07-16 00:13:25.206909] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:33.644 [2024-07-16 00:13:25.206939] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:33.644 [2024-07-16 00:13:25.206949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.206974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:33.644 [2024-07-16 00:13:25.206988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207019] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:33.644 [2024-07-16 00:13:25.207029] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.644 [2024-07-16 00:13:25.207051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.644 [2024-07-16 00:13:25.207098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:33.644 [2024-07-16 00:13:25.207115] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:33.644 [2024-07-16 00:13:25.207124] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:33.644 [2024-07-16 00:13:25.207131] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:33.644 [2024-07-16 00:13:25.207140] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:33.644 [2024-07-16 00:13:25.207147] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:33.644 [2024-07-16 00:13:25.207155] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:33.644 [2024-07-16 00:13:25.207163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:33.644 [2024-07-16 00:13:25.207230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:33.644 [2024-07-16 00:13:25.207262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.644 [2024-07-16 00:13:25.207274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.644 [2024-07-16 00:13:25.207285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.644 [2024-07-16 00:13:25.207296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.644 [2024-07-16 00:13:25.207304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:33.644 [2024-07-16 00:13:25.207446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:33.644 [2024-07-16 00:13:25.207457] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:33.644 [2024-07-16 00:13:25.207465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207480] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:33.644 [2024-07-16 00:13:25.207518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:33.644 [2024-07-16 00:13:25.207583] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207613] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:33.644 [2024-07-16 00:13:25.207620] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:33.644 [2024-07-16 00:13:25.207629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:33.644 [2024-07-16 00:13:25.207647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:33.644 [2024-07-16 00:13:25.207666] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:33.644 [2024-07-16 00:13:25.207688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207715] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:33.644 [2024-07-16 00:13:25.207722] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.644 [2024-07-16 00:13:25.207731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.644 [2024-07-16 00:13:25.207753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:33.644 [2024-07-16 00:13:25.207776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207802] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:33.644 [2024-07-16 00:13:25.207810] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.644 [2024-07-16 00:13:25.207819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.644 [2024-07-16 00:13:25.207836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:33.644 [2024-07-16 00:13:25.207851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
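(The controller-initialization trace above and below, connect adminq, read VS and CAP, enable, then the identify and set-features round trips, is what spdk_nvme_identify prints with debug logging enabled. The invocation from this test, nvmf_vfio_user.sh@83, which can be rerun standalone while the listener exists:)

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -g -L nvme -L nvme_vfio -L vfio_pci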
00:09:33.644 [2024-07-16 00:13:25.207903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207954] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:33.644 [2024-07-16 00:13:25.207962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:33.644 [2024-07-16 00:13:25.207971] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:33.644 [2024-07-16 00:13:25.207999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:33.644 [2024-07-16 00:13:25.208019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:33.644 [2024-07-16 00:13:25.208038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:33.645 [2024-07-16 00:13:25.208050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:33.645 [2024-07-16 00:13:25.208066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:33.645 [2024-07-16 00:13:25.208077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:33.645 [2024-07-16 00:13:25.208094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:33.645 [2024-07-16 00:13:25.208105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:33.645 [2024-07-16 00:13:25.208128] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:33.645 [2024-07-16 00:13:25.208138] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:33.645 [2024-07-16 00:13:25.208144] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:33.645 [2024-07-16 00:13:25.208151] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:33.645 [2024-07-16 00:13:25.208175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:33.645 [2024-07-16 00:13:25.208187] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:33.645 
[2024-07-16 00:13:25.208195] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:33.645 [2024-07-16 00:13:25.208203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:33.645 [2024-07-16 00:13:25.208218] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:33.645 [2024-07-16 00:13:25.208241] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.645 [2024-07-16 00:13:25.208250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.645 [2024-07-16 00:13:25.208262] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:33.645 [2024-07-16 00:13:25.208270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:33.645 [2024-07-16 00:13:25.208278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:33.645 [2024-07-16 00:13:25.208289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:33.645 [2024-07-16 00:13:25.208308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:33.645 [2024-07-16 00:13:25.208327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:33.645 [2024-07-16 00:13:25.208338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:33.645 ===================================================== 00:09:33.645 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:33.645 ===================================================== 00:09:33.645 Controller Capabilities/Features 00:09:33.645 ================================ 00:09:33.645 Vendor ID: 4e58 00:09:33.645 Subsystem Vendor ID: 4e58 00:09:33.645 Serial Number: SPDK1 00:09:33.645 Model Number: SPDK bdev Controller 00:09:33.645 Firmware Version: 24.09 00:09:33.645 Recommended Arb Burst: 6 00:09:33.645 IEEE OUI Identifier: 8d 6b 50 00:09:33.645 Multi-path I/O 00:09:33.645 May have multiple subsystem ports: Yes 00:09:33.645 May have multiple controllers: Yes 00:09:33.645 Associated with SR-IOV VF: No 00:09:33.645 Max Data Transfer Size: 131072 00:09:33.645 Max Number of Namespaces: 32 00:09:33.645 Max Number of I/O Queues: 127 00:09:33.645 NVMe Specification Version (VS): 1.3 00:09:33.645 NVMe Specification Version (Identify): 1.3 00:09:33.645 Maximum Queue Entries: 256 00:09:33.645 Contiguous Queues Required: Yes 00:09:33.645 Arbitration Mechanisms Supported 00:09:33.645 Weighted Round Robin: Not Supported 00:09:33.645 Vendor Specific: Not Supported 00:09:33.645 Reset Timeout: 15000 ms 00:09:33.645 Doorbell Stride: 4 bytes 00:09:33.645 NVM Subsystem Reset: Not Supported 00:09:33.645 Command Sets Supported 00:09:33.645 NVM Command Set: Supported 00:09:33.645 Boot Partition: Not Supported 00:09:33.645 Memory Page Size Minimum: 4096 bytes 00:09:33.645 Memory Page Size Maximum: 4096 bytes 00:09:33.645 Persistent Memory Region: Not Supported 
00:09:33.645 Optional Asynchronous Events Supported
00:09:33.645 Namespace Attribute Notices: Supported
00:09:33.645 Firmware Activation Notices: Not Supported
00:09:33.645 ANA Change Notices: Not Supported
00:09:33.645 PLE Aggregate Log Change Notices: Not Supported
00:09:33.645 LBA Status Info Alert Notices: Not Supported
00:09:33.645 EGE Aggregate Log Change Notices: Not Supported
00:09:33.645 Normal NVM Subsystem Shutdown event: Not Supported
00:09:33.645 Zone Descriptor Change Notices: Not Supported
00:09:33.645 Discovery Log Change Notices: Not Supported
00:09:33.645 Controller Attributes
00:09:33.645 128-bit Host Identifier: Supported
00:09:33.645 Non-Operational Permissive Mode: Not Supported
00:09:33.645 NVM Sets: Not Supported
00:09:33.645 Read Recovery Levels: Not Supported
00:09:33.645 Endurance Groups: Not Supported
00:09:33.645 Predictable Latency Mode: Not Supported
00:09:33.645 Traffic Based Keep Alive: Not Supported
00:09:33.645 Namespace Granularity: Not Supported
00:09:33.645 SQ Associations: Not Supported
00:09:33.645 UUID List: Not Supported
00:09:33.645 Multi-Domain Subsystem: Not Supported
00:09:33.645 Fixed Capacity Management: Not Supported
00:09:33.645 Variable Capacity Management: Not Supported
00:09:33.645 Delete Endurance Group: Not Supported
00:09:33.645 Delete NVM Set: Not Supported
00:09:33.645 Extended LBA Formats Supported: Not Supported
00:09:33.645 Flexible Data Placement Supported: Not Supported
00:09:33.645
00:09:33.645 Controller Memory Buffer Support
00:09:33.645 ================================
00:09:33.645 Supported: No
00:09:33.645
00:09:33.645 Persistent Memory Region Support
00:09:33.645 ================================
00:09:33.645 Supported: No
00:09:33.645
00:09:33.645 Admin Command Set Attributes
00:09:33.645 ============================
00:09:33.645 Security Send/Receive: Not Supported
00:09:33.645 Format NVM: Not Supported
00:09:33.645 Firmware Activate/Download: Not Supported
00:09:33.645 Namespace Management: Not Supported
00:09:33.645 Device Self-Test: Not Supported
00:09:33.645 Directives: Not Supported
00:09:33.645 NVMe-MI: Not Supported
00:09:33.645 Virtualization Management: Not Supported
00:09:33.645 Doorbell Buffer Config: Not Supported
00:09:33.645 Get LBA Status Capability: Not Supported
00:09:33.645 Command & Feature Lockdown Capability: Not Supported
00:09:33.645 Abort Command Limit: 4
00:09:33.645 Async Event Request Limit: 4
00:09:33.645 Number of Firmware Slots: N/A
00:09:33.645 Firmware Slot 1 Read-Only: N/A
00:09:33.645 Firmware Activation Without Reset: N/A
00:09:33.645 Multiple Update Detection Support: N/A
00:09:33.645 Firmware Update Granularity: No Information Provided
00:09:33.645 Per-Namespace SMART Log: No
00:09:33.645 Asymmetric Namespace Access Log Page: Not Supported
00:09:33.645 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:09:33.645 Command Effects Log Page: Supported
00:09:33.645 Get Log Page Extended Data: Supported
00:09:33.645 Telemetry Log Pages: Not Supported
00:09:33.645 Persistent Event Log Pages: Not Supported
00:09:33.645 Supported Log Pages Log Page: May Support
00:09:33.645 Commands Supported & Effects Log Page: Not Supported
00:09:33.645 Feature Identifiers & Effects Log Page: May Support
00:09:33.645 NVMe-MI Commands & Effects Log Page: May Support
00:09:33.645 Data Area 4 for Telemetry Log: Not Supported
00:09:33.645 Error Log Page Entries Supported: 128
00:09:33.645 Keep Alive: Supported
00:09:33.645 Keep Alive Granularity: 10000 ms
00:09:33.645
00:09:33.645 NVM Command Set Attributes
00:09:33.645 ========================== 00:09:33.645 Submission Queue Entry Size 00:09:33.645 Max: 64 00:09:33.645 Min: 64 00:09:33.645 Completion Queue Entry Size 00:09:33.645 Max: 16 00:09:33.645 Min: 16 00:09:33.645 Number of Namespaces: 32 00:09:33.645 Compare Command: Supported 00:09:33.645 Write Uncorrectable Command: Not Supported 00:09:33.645 Dataset Management Command: Supported 00:09:33.645 Write Zeroes Command: Supported 00:09:33.645 Set Features Save Field: Not Supported 00:09:33.645 Reservations: Not Supported 00:09:33.645 Timestamp: Not Supported 00:09:33.645 Copy: Supported 00:09:33.645 Volatile Write Cache: Present 00:09:33.645 Atomic Write Unit (Normal): 1 00:09:33.645 Atomic Write Unit (PFail): 1 00:09:33.645 Atomic Compare & Write Unit: 1 00:09:33.645 Fused Compare & Write: Supported 00:09:33.645 Scatter-Gather List 00:09:33.645 SGL Command Set: Supported (Dword aligned) 00:09:33.645 SGL Keyed: Not Supported 00:09:33.645 SGL Bit Bucket Descriptor: Not Supported 00:09:33.645 SGL Metadata Pointer: Not Supported 00:09:33.645 Oversized SGL: Not Supported 00:09:33.645 SGL Metadata Address: Not Supported 00:09:33.645 SGL Offset: Not Supported 00:09:33.645 Transport SGL Data Block: Not Supported 00:09:33.645 Replay Protected Memory Block: Not Supported 00:09:33.645 00:09:33.645 Firmware Slot Information 00:09:33.645 ========================= 00:09:33.645 Active slot: 1 00:09:33.645 Slot 1 Firmware Revision: 24.09 00:09:33.645 00:09:33.645 00:09:33.645 Commands Supported and Effects 00:09:33.645 ============================== 00:09:33.645 Admin Commands 00:09:33.645 -------------- 00:09:33.645 Get Log Page (02h): Supported 00:09:33.645 Identify (06h): Supported 00:09:33.645 Abort (08h): Supported 00:09:33.645 Set Features (09h): Supported 00:09:33.646 Get Features (0Ah): Supported 00:09:33.646 Asynchronous Event Request (0Ch): Supported 00:09:33.646 Keep Alive (18h): Supported 00:09:33.646 I/O Commands 00:09:33.646 ------------ 00:09:33.646 Flush (00h): Supported LBA-Change 00:09:33.646 Write (01h): Supported LBA-Change 00:09:33.646 Read (02h): Supported 00:09:33.646 Compare (05h): Supported 00:09:33.646 Write Zeroes (08h): Supported LBA-Change 00:09:33.646 Dataset Management (09h): Supported LBA-Change 00:09:33.646 Copy (19h): Supported LBA-Change 00:09:33.646 00:09:33.646 Error Log 00:09:33.646 ========= 00:09:33.646 00:09:33.646 Arbitration 00:09:33.646 =========== 00:09:33.646 Arbitration Burst: 1 00:09:33.646 00:09:33.646 Power Management 00:09:33.646 ================ 00:09:33.646 Number of Power States: 1 00:09:33.646 Current Power State: Power State #0 00:09:33.646 Power State #0: 00:09:33.646 Max Power: 0.00 W 00:09:33.646 Non-Operational State: Operational 00:09:33.646 Entry Latency: Not Reported 00:09:33.646 Exit Latency: Not Reported 00:09:33.646 Relative Read Throughput: 0 00:09:33.646 Relative Read Latency: 0 00:09:33.646 Relative Write Throughput: 0 00:09:33.646 Relative Write Latency: 0 00:09:33.646 Idle Power: Not Reported 00:09:33.646 Active Power: Not Reported 00:09:33.646 Non-Operational Permissive Mode: Not Supported 00:09:33.646 00:09:33.646 Health Information 00:09:33.646 ================== 00:09:33.646 Critical Warnings: 00:09:33.646 Available Spare Space: OK 00:09:33.646 Temperature: OK 00:09:33.646 Device Reliability: OK 00:09:33.646 Read Only: No 00:09:33.646 Volatile Memory Backup: OK 00:09:33.646 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:33.646 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:33.646 Available Spare: 0% 00:09:33.646 
[2024-07-16 00:13:25.208456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:33.646 [2024-07-16 00:13:25.208471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:33.646 [2024-07-16 00:13:25.208515] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:33.646 [2024-07-16 00:13:25.208532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.646 [2024-07-16 00:13:25.208542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.646 [2024-07-16 00:13:25.208552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.646 [2024-07-16 00:13:25.208561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.646 [2024-07-16 00:13:25.208924] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:33.646 [2024-07-16 00:13:25.208949] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:33.646 [2024-07-16 00:13:25.209940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:33.646 [2024-07-16 00:13:25.210015] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:33.646 [2024-07-16 00:13:25.210030] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:33.646 [2024-07-16 00:13:25.210937] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:33.646 [2024-07-16 00:13:25.210961] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:33.646 [2024-07-16 00:13:25.211020] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:33.646 [2024-07-16 00:13:25.217889] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:33.904 Available Spare Threshold: 0% 00:09:33.904 Life Percentage Used: 0% 00:09:33.904 Data Units Read: 0 00:09:33.904 Data Units Written: 0 00:09:33.904 Host Read Commands: 0 00:09:33.904 Host Write Commands: 0 00:09:33.904 Controller Busy Time: 0 minutes 00:09:33.904 Power Cycles: 0 00:09:33.904 Power On Hours: 0 hours 00:09:33.904 Unsafe Shutdowns: 0 00:09:33.904 Unrecoverable Media Errors: 0 00:09:33.904 Lifetime Error Log Entries: 0 00:09:33.904 Warning Temperature Time: 0 minutes 00:09:33.904 Critical Temperature Time: 0 minutes 00:09:33.904 00:09:33.904 Number of Queues 00:09:33.904 ================ 00:09:33.904 Number of I/O Submission Queues: 127 00:09:33.904 Number of I/O Completion Queues: 127 00:09:33.904 00:09:33.904 Active Namespaces 00:09:33.904 ================= 00:09:33.904 Namespace ID:1 00:09:33.904 Error Recovery Timeout: Unlimited 00:09:33.904 Command
Set Identifier: NVM (00h) 00:09:33.904 Deallocate: Supported 00:09:33.904 Deallocated/Unwritten Error: Not Supported 00:09:33.904 Deallocated Read Value: Unknown 00:09:33.904 Deallocate in Write Zeroes: Not Supported 00:09:33.904 Deallocated Guard Field: 0xFFFF 00:09:33.904 Flush: Supported 00:09:33.904 Reservation: Supported 00:09:33.904 Namespace Sharing Capabilities: Multiple Controllers 00:09:33.904 Size (in LBAs): 131072 (0GiB) 00:09:33.904 Capacity (in LBAs): 131072 (0GiB) 00:09:33.904 Utilization (in LBAs): 131072 (0GiB) 00:09:33.904 NGUID: B097F370A36947969180860786327B30 00:09:33.904 UUID: b097f370-a369-4796-9180-860786327b30 00:09:33.904 Thin Provisioning: Not Supported 00:09:33.904 Per-NS Atomic Units: Yes 00:09:33.904 Atomic Boundary Size (Normal): 0 00:09:33.904 Atomic Boundary Size (PFail): 0 00:09:33.904 Atomic Boundary Offset: 0 00:09:33.904 Maximum Single Source Range Length: 65535 00:09:33.904 Maximum Copy Length: 65535 00:09:33.904 Maximum Source Range Count: 1 00:09:33.904 NGUID/EUI64 Never Reused: No 00:09:33.904 Namespace Write Protected: No 00:09:33.904 Number of LBA Formats: 1 00:09:33.904 Current LBA Format: LBA Format #00 00:09:33.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.904 00:09:33.904 00:13:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:33.904 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.904 [2024-07-16 00:13:25.447752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:39.165 Initializing NVMe Controllers 00:09:39.165 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:39.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:39.165 Initialization complete. Launching workers. 00:09:39.165 ======================================================== 00:09:39.165 Latency(us) 00:09:39.165 Device Information : IOPS MiB/s Average min max 00:09:39.165 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34162.36 133.45 3745.98 1182.14 7701.68 00:09:39.165 ======================================================== 00:09:39.165 Total : 34162.36 133.45 3745.98 1182.14 7701.68 00:09:39.165 00:09:39.165 [2024-07-16 00:13:30.467171] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:39.165 00:13:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:39.165 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.165 [2024-07-16 00:13:30.711369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:44.468 Initializing NVMe Controllers 00:09:44.468 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:44.468 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:44.468 Initialization complete. Launching workers. 
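The spdk_nvme_perf runs in this test all share one shape; a minimal sketch of the read run above, with the flags spelled out (meanings as commonly documented for SPDK's perf tool, so confirm against spdk_nvme_perf --help on this build):

  # -q 128: queue depth; -o 4096: I/O size in bytes; -w read: workload type;
  # -t 5: run time in seconds; -c 0x2: core mask (lcore 1 only);
  # -s 256: DPDK hugepage memory size in MB; -g: single-file-segments DPDK
  # memory mode (matches the --single-file-segments EAL parameter echoed
  # later in this log).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The write run, whose results follow below, differs only in -w write.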
00:09:44.468 ======================================================== 00:09:44.468 Latency(us) 00:09:44.468 Device Information : IOPS MiB/s Average min max 00:09:44.468 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15972.10 62.39 8019.21 5970.52 15842.45 00:09:44.468 ======================================================== 00:09:44.468 Total : 15972.10 62.39 8019.21 5970.52 15842.45 00:09:44.468 00:09:44.468 [2024-07-16 00:13:35.751116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:44.468 00:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:44.468 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.468 [2024-07-16 00:13:35.970203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:49.740 [2024-07-16 00:13:41.039167] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:49.740 Initializing NVMe Controllers 00:09:49.740 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:49.740 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:49.740 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:49.740 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:49.740 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:49.740 Initialization complete. Launching workers. 00:09:49.740 Starting thread on core 2 00:09:49.740 Starting thread on core 3 00:09:49.740 Starting thread on core 1 00:09:49.740 00:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:49.740 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.740 [2024-07-16 00:13:41.350420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:53.025 [2024-07-16 00:13:44.422791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:53.025 Initializing NVMe Controllers 00:09:53.025 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.025 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.025 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:53.025 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:53.025 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:53.025 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:53.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:53.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:53.025 Initialization complete. Launching workers. 
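The MiB/s column in the two perf tables is derived from the IOPS column and the 4 KiB I/O size (-o 4096); a quick sanity check, assuming bc is available on the host:

  echo 'scale=2; 34162.36 * 4096 / 1048576' | bc   # read run  -> 133.44 (table: 133.45, rounded)
  echo 'scale=2; 15972.10 * 4096 / 1048576' | bc   # write run -> 62.39, matching the table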
00:09:53.025 Starting thread on core 1 with urgent priority queue 00:09:53.025 Starting thread on core 2 with urgent priority queue 00:09:53.025 Starting thread on core 3 with urgent priority queue 00:09:53.025 Starting thread on core 0 with urgent priority queue 00:09:53.025 SPDK bdev Controller (SPDK1 ) core 0: 5726.67 IO/s 17.46 secs/100000 ios 00:09:53.025 SPDK bdev Controller (SPDK1 ) core 1: 5571.33 IO/s 17.95 secs/100000 ios 00:09:53.025 SPDK bdev Controller (SPDK1 ) core 2: 5304.67 IO/s 18.85 secs/100000 ios 00:09:53.025 SPDK bdev Controller (SPDK1 ) core 3: 5602.67 IO/s 17.85 secs/100000 ios 00:09:53.025 ======================================================== 00:09:53.025 00:09:53.025 00:13:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:53.025 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.284 [2024-07-16 00:13:44.717451] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:53.284 Initializing NVMe Controllers 00:09:53.284 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.284 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.284 Namespace ID: 1 size: 0GB 00:09:53.284 Initialization complete. 00:09:53.284 INFO: using host memory buffer for IO 00:09:53.284 Hello world! 00:09:53.284 [2024-07-16 00:13:44.753121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:53.284 00:13:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:53.284 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.542 [2024-07-16 00:13:45.051323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:54.481 Initializing NVMe Controllers 00:09:54.481 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:54.481 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:54.481 Initialization complete. Launching workers. 
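In the arbitration table above, the two per-core columns are reciprocals of each other: secs/100000 ios = 100000 / (IO/s). Checking core 0, again with bc:

  echo 'scale=2; 100000 / 5726.67' | bc   # -> 17.46, as reported for core 0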
00:09:54.481 submit (in ns) avg, min, max = 8853.1, 3534.4, 4015341.1 00:09:54.481 complete (in ns) avg, min, max = 24926.1, 2062.2, 4016114.4 00:09:54.481 00:09:54.481 Submit histogram 00:09:54.481 ================ 00:09:54.481 Range in us Cumulative Count 00:09:54.481 3.532 - 3.556: 0.0689% ( 9) 00:09:54.481 3.556 - 3.579: 0.4826% ( 54) 00:09:54.481 3.579 - 3.603: 1.4477% ( 126) 00:09:54.481 3.603 - 3.627: 4.5883% ( 410) 00:09:54.481 3.627 - 3.650: 10.3485% ( 752) 00:09:54.481 3.650 - 3.674: 18.1003% ( 1012) 00:09:54.481 3.674 - 3.698: 27.2999% ( 1201) 00:09:54.481 3.698 - 3.721: 37.0663% ( 1275) 00:09:54.481 3.721 - 3.745: 44.9789% ( 1033) 00:09:54.481 3.745 - 3.769: 51.3213% ( 828) 00:09:54.481 3.769 - 3.793: 55.6722% ( 568) 00:09:54.481 3.793 - 3.816: 59.4332% ( 491) 00:09:54.481 3.816 - 3.840: 62.3746% ( 384) 00:09:54.481 3.840 - 3.864: 66.1892% ( 498) 00:09:54.481 3.864 - 3.887: 69.7281% ( 462) 00:09:54.481 3.887 - 3.911: 73.9946% ( 557) 00:09:54.481 3.911 - 3.935: 78.5982% ( 601) 00:09:54.481 3.935 - 3.959: 82.3975% ( 496) 00:09:54.481 3.959 - 3.982: 85.4232% ( 395) 00:09:54.481 3.982 - 4.006: 87.3075% ( 246) 00:09:54.481 4.006 - 4.030: 88.7859% ( 193) 00:09:54.481 4.030 - 4.053: 90.2183% ( 187) 00:09:54.481 4.053 - 4.077: 91.3750% ( 151) 00:09:54.481 4.077 - 4.101: 92.2635% ( 116) 00:09:54.481 4.101 - 4.124: 93.1750% ( 119) 00:09:54.481 4.124 - 4.148: 94.0406% ( 113) 00:09:54.481 4.148 - 4.172: 94.6687% ( 82) 00:09:54.481 4.172 - 4.196: 95.2126% ( 71) 00:09:54.481 4.196 - 4.219: 95.6722% ( 60) 00:09:54.481 4.219 - 4.243: 95.9403% ( 35) 00:09:54.481 4.243 - 4.267: 96.1011% ( 21) 00:09:54.481 4.267 - 4.290: 96.2696% ( 22) 00:09:54.481 4.290 - 4.314: 96.3998% ( 17) 00:09:54.481 4.314 - 4.338: 96.4994% ( 13) 00:09:54.481 4.338 - 4.361: 96.5913% ( 12) 00:09:54.481 4.361 - 4.385: 96.7139% ( 16) 00:09:54.481 4.385 - 4.409: 96.8211% ( 14) 00:09:54.481 4.409 - 4.433: 96.8977% ( 10) 00:09:54.481 4.433 - 4.456: 96.9514% ( 7) 00:09:54.481 4.456 - 4.480: 97.0126% ( 8) 00:09:54.481 4.480 - 4.504: 97.0739% ( 8) 00:09:54.481 4.504 - 4.527: 97.1199% ( 6) 00:09:54.481 4.527 - 4.551: 97.1275% ( 1) 00:09:54.481 4.551 - 4.575: 97.1505% ( 3) 00:09:54.481 4.575 - 4.599: 97.1582% ( 1) 00:09:54.481 4.599 - 4.622: 97.1888% ( 4) 00:09:54.481 4.622 - 4.646: 97.2041% ( 2) 00:09:54.481 4.646 - 4.670: 97.2271% ( 3) 00:09:54.481 4.670 - 4.693: 97.2501% ( 3) 00:09:54.481 4.693 - 4.717: 97.2654% ( 2) 00:09:54.481 4.717 - 4.741: 97.3114% ( 6) 00:09:54.481 4.741 - 4.764: 97.3573% ( 6) 00:09:54.481 4.764 - 4.788: 97.4033% ( 6) 00:09:54.481 4.788 - 4.812: 97.4569% ( 7) 00:09:54.481 4.812 - 4.836: 97.5029% ( 6) 00:09:54.481 4.836 - 4.859: 97.5871% ( 11) 00:09:54.481 4.859 - 4.883: 97.6254% ( 5) 00:09:54.481 4.883 - 4.907: 97.6944% ( 9) 00:09:54.481 4.907 - 4.930: 97.7403% ( 6) 00:09:54.481 4.930 - 4.954: 97.7786% ( 5) 00:09:54.481 4.954 - 4.978: 97.8322% ( 7) 00:09:54.481 4.978 - 5.001: 97.8629% ( 4) 00:09:54.481 5.001 - 5.025: 97.8935% ( 4) 00:09:54.481 5.025 - 5.049: 97.9165% ( 3) 00:09:54.481 5.049 - 5.073: 97.9471% ( 4) 00:09:54.481 5.073 - 5.096: 97.9778% ( 4) 00:09:54.481 5.096 - 5.120: 98.0161% ( 5) 00:09:54.481 5.120 - 5.144: 98.0467% ( 4) 00:09:54.481 5.144 - 5.167: 98.0620% ( 2) 00:09:54.481 5.167 - 5.191: 98.0697% ( 1) 00:09:54.481 5.191 - 5.215: 98.1003% ( 4) 00:09:54.481 5.215 - 5.239: 98.1080% ( 1) 00:09:54.481 5.262 - 5.286: 98.1157% ( 1) 00:09:54.481 5.310 - 5.333: 98.1310% ( 2) 00:09:54.481 5.404 - 5.428: 98.1386% ( 1) 00:09:54.481 5.476 - 5.499: 98.1463% ( 1) 00:09:54.481 5.547 - 5.570: 98.1540% ( 1) 
00:09:54.481 5.665 - 5.689: 98.1616% ( 1) 00:09:54.481 6.163 - 6.210: 98.1693% ( 1) 00:09:54.481 6.495 - 6.542: 98.1846% ( 2) 00:09:54.481 6.590 - 6.637: 98.1923% ( 1) 00:09:54.481 6.637 - 6.684: 98.1999% ( 1) 00:09:54.481 6.684 - 6.732: 98.2076% ( 1) 00:09:54.481 6.779 - 6.827: 98.2229% ( 2) 00:09:54.481 6.827 - 6.874: 98.2382% ( 2) 00:09:54.481 6.921 - 6.969: 98.2459% ( 1) 00:09:54.481 6.969 - 7.016: 98.2535% ( 1) 00:09:54.481 7.064 - 7.111: 98.2612% ( 1) 00:09:54.481 7.159 - 7.206: 98.2689% ( 1) 00:09:54.481 7.253 - 7.301: 98.2765% ( 1) 00:09:54.481 7.301 - 7.348: 98.2842% ( 1) 00:09:54.481 7.348 - 7.396: 98.3072% ( 3) 00:09:54.481 7.396 - 7.443: 98.3148% ( 1) 00:09:54.481 7.443 - 7.490: 98.3225% ( 1) 00:09:54.481 7.585 - 7.633: 98.3378% ( 2) 00:09:54.481 7.633 - 7.680: 98.3455% ( 1) 00:09:54.481 7.680 - 7.727: 98.3608% ( 2) 00:09:54.481 7.727 - 7.775: 98.3684% ( 1) 00:09:54.481 7.775 - 7.822: 98.3838% ( 2) 00:09:54.481 7.822 - 7.870: 98.3991% ( 2) 00:09:54.481 7.870 - 7.917: 98.4067% ( 1) 00:09:54.481 7.917 - 7.964: 98.4221% ( 2) 00:09:54.481 7.964 - 8.012: 98.4450% ( 3) 00:09:54.481 8.012 - 8.059: 98.4527% ( 1) 00:09:54.481 8.059 - 8.107: 98.4604% ( 1) 00:09:54.481 8.107 - 8.154: 98.4680% ( 1) 00:09:54.481 8.154 - 8.201: 98.4833% ( 2) 00:09:54.481 8.201 - 8.249: 98.4910% ( 1) 00:09:54.481 8.249 - 8.296: 98.4987% ( 1) 00:09:54.481 8.344 - 8.391: 98.5140% ( 2) 00:09:54.481 8.391 - 8.439: 98.5216% ( 1) 00:09:54.481 8.486 - 8.533: 98.5293% ( 1) 00:09:54.481 8.533 - 8.581: 98.5446% ( 2) 00:09:54.481 8.770 - 8.818: 98.5523% ( 1) 00:09:54.481 8.818 - 8.865: 98.5599% ( 1) 00:09:54.481 8.865 - 8.913: 98.5676% ( 1) 00:09:54.481 9.055 - 9.102: 98.5753% ( 1) 00:09:54.481 9.244 - 9.292: 98.5829% ( 1) 00:09:54.481 9.339 - 9.387: 98.5906% ( 1) 00:09:54.481 9.387 - 9.434: 98.6059% ( 2) 00:09:54.481 9.576 - 9.624: 98.6136% ( 1) 00:09:54.481 9.766 - 9.813: 98.6212% ( 1) 00:09:54.481 9.861 - 9.908: 98.6289% ( 1) 00:09:54.481 10.572 - 10.619: 98.6365% ( 1) 00:09:54.481 10.667 - 10.714: 98.6442% ( 1) 00:09:54.481 10.761 - 10.809: 98.6519% ( 1) 00:09:54.481 10.856 - 10.904: 98.6595% ( 1) 00:09:54.481 10.904 - 10.951: 98.6672% ( 1) 00:09:54.481 10.951 - 10.999: 98.6748% ( 1) 00:09:54.481 11.046 - 11.093: 98.6902% ( 2) 00:09:54.481 11.188 - 11.236: 98.6978% ( 1) 00:09:54.481 11.378 - 11.425: 98.7055% ( 1) 00:09:54.481 11.567 - 11.615: 98.7131% ( 1) 00:09:54.481 12.089 - 12.136: 98.7208% ( 1) 00:09:54.481 12.136 - 12.231: 98.7285% ( 1) 00:09:54.481 12.231 - 12.326: 98.7361% ( 1) 00:09:54.481 12.326 - 12.421: 98.7438% ( 1) 00:09:54.481 12.610 - 12.705: 98.7591% ( 2) 00:09:54.481 12.705 - 12.800: 98.7668% ( 1) 00:09:54.481 12.895 - 12.990: 98.7744% ( 1) 00:09:54.481 13.369 - 13.464: 98.7821% ( 1) 00:09:54.481 13.464 - 13.559: 98.8051% ( 3) 00:09:54.481 13.843 - 13.938: 98.8204% ( 2) 00:09:54.481 14.033 - 14.127: 98.8434% ( 3) 00:09:54.481 14.317 - 14.412: 98.8510% ( 1) 00:09:54.481 14.601 - 14.696: 98.8587% ( 1) 00:09:54.481 15.360 - 15.455: 98.8740% ( 2) 00:09:54.481 16.972 - 17.067: 98.8817% ( 1) 00:09:54.481 17.256 - 17.351: 98.8893% ( 1) 00:09:54.481 17.351 - 17.446: 98.8970% ( 1) 00:09:54.481 17.446 - 17.541: 98.9200% ( 3) 00:09:54.481 17.541 - 17.636: 98.9736% ( 7) 00:09:54.481 17.636 - 17.730: 99.0195% ( 6) 00:09:54.481 17.730 - 17.825: 99.0272% ( 1) 00:09:54.481 17.825 - 17.920: 99.0578% ( 4) 00:09:54.481 17.920 - 18.015: 99.0808% ( 3) 00:09:54.481 18.015 - 18.110: 99.1115% ( 4) 00:09:54.481 18.110 - 18.204: 99.1957% ( 11) 00:09:54.481 18.204 - 18.299: 99.2646% ( 9) 00:09:54.481 18.299 - 18.394: 
99.3412% ( 10) 00:09:54.481 18.394 - 18.489: 99.4561% ( 15) 00:09:54.481 18.489 - 18.584: 99.5481% ( 12) 00:09:54.481 18.584 - 18.679: 99.5940% ( 6) 00:09:54.481 18.679 - 18.773: 99.6323% ( 5) 00:09:54.481 18.773 - 18.868: 99.6706% ( 5) 00:09:54.481 18.868 - 18.963: 99.6936% ( 3) 00:09:54.481 18.963 - 19.058: 99.7166% ( 3) 00:09:54.481 19.058 - 19.153: 99.7396% ( 3) 00:09:54.481 19.153 - 19.247: 99.7549% ( 2) 00:09:54.481 19.247 - 19.342: 99.7625% ( 1) 00:09:54.481 19.437 - 19.532: 99.7702% ( 1) 00:09:54.482 19.532 - 19.627: 99.7932% ( 3) 00:09:54.482 19.721 - 19.816: 99.8085% ( 2) 00:09:54.482 19.816 - 19.911: 99.8162% ( 1) 00:09:54.482 19.911 - 20.006: 99.8238% ( 1) 00:09:54.482 21.239 - 21.333: 99.8315% ( 1) 00:09:54.482 21.428 - 21.523: 99.8391% ( 1) 00:09:54.482 23.040 - 23.135: 99.8468% ( 1) 00:09:54.482 24.841 - 25.031: 99.8545% ( 1) 00:09:54.482 26.359 - 26.548: 99.8621% ( 1) 00:09:54.482 26.738 - 26.927: 99.8698% ( 1) 00:09:54.482 27.496 - 27.686: 99.8774% ( 1) 00:09:54.482 3276.800 - 3301.073: 99.8851% ( 1) 00:09:54.482 3980.705 - 4004.978: 99.9770% ( 12) 00:09:54.482 4004.978 - 4029.250: 100.0000% ( 3) 00:09:54.482 00:09:54.482 Complete histogram 00:09:54.482 ================== 00:09:54.482 Range in us Cumulative Count 00:09:54.482 2.062 - 2.074: 4.9636% ( 648) 00:09:54.482 2.074 - 2.086: 16.6909% ( 1531) 00:09:54.482 2.086 - 2.098: 19.5710% ( 376) 00:09:54.482 2.098 - 2.110: 45.2547% ( 3353) 00:09:54.482 2.110 - 2.121: 59.3949% ( 1846) 00:09:54.482 2.121 - 2.133: 61.3022% ( 249) 00:09:54.482 2.133 - 2.145: 65.9134% ( 602) 00:09:54.482 2.145 - 2.157: 68.2191% ( 301) 00:09:54.482 2.157 - 2.169: 70.2030% ( 259) 00:09:54.482 2.169 - 2.181: 78.2152% ( 1046) 00:09:54.482 2.181 - 2.193: 81.2332% ( 394) 00:09:54.482 2.193 - 2.204: 82.1218% ( 116) 00:09:54.482 2.204 - 2.216: 83.7227% ( 209) 00:09:54.482 2.216 - 2.228: 84.8104% ( 142) 00:09:54.482 2.228 - 2.240: 86.1586% ( 176) 00:09:54.482 2.240 - 2.252: 90.1953% ( 527) 00:09:54.482 2.252 - 2.264: 92.4090% ( 289) 00:09:54.482 2.264 - 2.276: 93.3512% ( 123) 00:09:54.482 2.276 - 2.287: 93.9180% ( 74) 00:09:54.482 2.287 - 2.299: 94.2015% ( 37) 00:09:54.482 2.299 - 2.311: 94.6227% ( 55) 00:09:54.482 2.311 - 2.323: 94.9062% ( 37) 00:09:54.482 2.323 - 2.335: 95.1206% ( 28) 00:09:54.482 2.335 - 2.347: 95.3581% ( 31) 00:09:54.482 2.347 - 2.359: 95.4653% ( 14) 00:09:54.482 2.359 - 2.370: 95.5573% ( 12) 00:09:54.482 2.370 - 2.382: 95.7105% ( 20) 00:09:54.482 2.382 - 2.394: 95.9709% ( 34) 00:09:54.482 2.394 - 2.406: 96.2007% ( 30) 00:09:54.482 2.406 - 2.418: 96.4381% ( 31) 00:09:54.482 2.418 - 2.430: 96.7445% ( 40) 00:09:54.482 2.430 - 2.441: 97.0969% ( 46) 00:09:54.482 2.441 - 2.453: 97.2654% ( 22) 00:09:54.482 2.453 - 2.465: 97.3727% ( 14) 00:09:54.482 2.465 - 2.477: 97.5105% ( 18) 00:09:54.482 2.477 - 2.489: 97.7173% ( 27) 00:09:54.482 2.489 - 2.501: 97.8016% ( 11) 00:09:54.482 2.501 - 2.513: 97.8935% ( 12) 00:09:54.482 2.513 - 2.524: 97.9778% ( 11) 00:09:54.482 2.524 - 2.536: 98.0467% ( 9) 00:09:54.482 2.536 - 2.548: 98.1003% ( 7) 00:09:54.482 2.548 - 2.560: 98.1233% ( 3) 00:09:54.482 2.560 - 2.572: 98.1310% ( 1) 00:09:54.482 2.572 - 2.584: 98.1846% ( 7) 00:09:54.482 2.607 - 2.619: 98.1923% ( 1) 00:09:54.482 2.619 - 2.631: 98.2076% ( 2) 00:09:54.482 2.643 - 2.655: 98.2229% ( 2) 00:09:54.482 2.679 - 2.690: 98.2306% ( 1) 00:09:54.482 2.690 - 2.702: 98.2382% ( 1) 00:09:54.482 2.702 - 2.714: 98.2459% ( 1) 00:09:54.482 2.714 - 2.726: 98.2535% ( 1) 00:09:54.482 2.726 - 2.738: 98.2689% ( 2) 00:09:54.482 2.738 - 2.750: 98.2765% ( 1) 00:09:54.482 2.750 
- 2.761: 98.2842% ( 1) 00:09:54.482 2.773 - 2.785: 98.3148% ( 4) 00:09:54.482 2.785 - 2.797: 98.3225% ( 1) 00:09:54.482 2.797 - 2.809: 98.3301% ( 1) 00:09:54.482 2.821 - 2.833: 98.3378% ( 1) 00:09:54.482 2.833 - 2.844: 98.3531% ( 2) 00:09:54.482 2.844 - 2.856: 98.3608% ( 1) 00:09:54.482 2.892 - 2.904: 98.3684% ( 1) 00:09:54.482 2.904 - 2.916: 98.3761% ( 1) 00:09:54.482 2.916 - 2.927: 98.3914% ( 2) 00:09:54.482 2.987 - 2.999: 98.3991% ( 1) 00:09:54.482 2.999 - 3.010: 98.4067% ( 1) 00:09:54.482 3.058 - 3.081: 98.4144% ( 1) 00:09:54.482 [2024-07-16 00:13:46.070615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:54.740 3.081 - 3.105: 98.4297% ( 2) 00:09:54.740 3.413 - 3.437: 98.4680% ( 5) 00:09:54.740 3.508 - 3.532: 98.4987% ( 4) 00:09:54.740 3.532 - 3.556: 98.5216% ( 3) 00:09:54.740 3.556 - 3.579: 98.5370% ( 2) 00:09:54.740 3.579 - 3.603: 98.5446% ( 1) 00:09:54.740 3.603 - 3.627: 98.5676% ( 3) 00:09:54.740 3.627 - 3.650: 98.5829% ( 2) 00:09:54.740 3.650 - 3.674: 98.6059% ( 3) 00:09:54.740 3.674 - 3.698: 98.6212% ( 2) 00:09:54.740 3.698 - 3.721: 98.6365% ( 2) 00:09:54.740 3.745 - 3.769: 98.6442% ( 1) 00:09:54.740 3.769 - 3.793: 98.6519% ( 1) 00:09:54.740 3.793 - 3.816: 98.6595% ( 1) 00:09:54.740 3.887 - 3.911: 98.6672% ( 1) 00:09:54.740 3.935 - 3.959: 98.6748% ( 1) 00:09:54.740 4.030 - 4.053: 98.6825% ( 1) 00:09:54.740 4.930 - 4.954: 98.6902% ( 1) 00:09:54.740 4.978 - 5.001: 98.6978% ( 1) 00:09:54.740 5.404 - 5.428: 98.7055% ( 1) 00:09:54.740 5.760 - 5.784: 98.7131% ( 1) 00:09:54.741 5.807 - 5.831: 98.7285% ( 2) 00:09:54.741 6.068 - 6.116: 98.7361% ( 1) 00:09:54.741 6.116 - 6.163: 98.7438% ( 1) 00:09:54.741 6.210 - 6.258: 98.7514% ( 1) 00:09:54.741 6.258 - 6.305: 98.7668% ( 2) 00:09:54.741 6.305 - 6.353: 98.7744% ( 1) 00:09:54.741 6.495 - 6.542: 98.7821% ( 1) 00:09:54.741 6.590 - 6.637: 98.7974% ( 2) 00:09:54.741 6.969 - 7.016: 98.8051% ( 1) 00:09:54.741 7.111 - 7.159: 98.8127% ( 1) 00:09:54.741 7.159 - 7.206: 98.8204% ( 1) 00:09:54.741 7.206 - 7.253: 98.8280% ( 1) 00:09:54.741 7.585 - 7.633: 98.8357% ( 1) 00:09:54.741 15.550 - 15.644: 98.8587% ( 3) 00:09:54.741 15.739 - 15.834: 98.8740% ( 2) 00:09:54.741 15.834 - 15.929: 98.8893% ( 2) 00:09:54.741 15.929 - 16.024: 98.9123% ( 3) 00:09:54.741 16.024 - 16.119: 98.9429% ( 4) 00:09:54.741 16.119 - 16.213: 98.9659% ( 3) 00:09:54.741 16.213 - 16.308: 98.9889% ( 3) 00:09:54.741 16.308 - 16.403: 99.0349% ( 6) 00:09:54.741 16.403 - 16.498: 99.0808% ( 6) 00:09:54.741 16.498 - 16.593: 99.1268% ( 6) 00:09:54.741 16.593 - 16.687: 99.1498% ( 3) 00:09:54.741 16.687 - 16.782: 99.2034% ( 7) 00:09:54.741 16.782 - 16.877: 99.2264% ( 3) 00:09:54.741 16.877 - 16.972: 99.2570% ( 4) 00:09:54.741 16.972 - 17.067: 99.2876% ( 4) 00:09:54.741 17.067 - 17.161: 99.3259% ( 5) 00:09:54.741 17.161 - 17.256: 99.3412% ( 2) 00:09:54.741 17.351 - 17.446: 99.3489% ( 1) 00:09:54.741 17.446 - 17.541: 99.3566% ( 1) 00:09:54.741 17.636 - 17.730: 99.3795% ( 3) 00:09:54.741 17.920 - 18.015: 99.3872% ( 1) 00:09:54.741 18.015 - 18.110: 99.4025% ( 2) 00:09:54.741 18.299 - 18.394: 99.4102% ( 1) 00:09:54.741 18.679 - 18.773: 99.4178% ( 1) 00:09:54.741 23.704 - 23.799: 99.4255% ( 1) 00:09:54.741 82.679 - 83.058: 99.4332% ( 1) 00:09:54.741 3980.705 - 4004.978: 99.8468% ( 54) 00:09:54.741 4004.978 - 4029.250: 100.0000% ( 20) 00:09:54.741 00:09:54.741 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:46
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:54.741 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:54.741 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:54.741 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:55.001 [ 00:09:55.001 { 00:09:55.001 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:55.001 "subtype": "Discovery", 00:09:55.001 "listen_addresses": [], 00:09:55.001 "allow_any_host": true, 00:09:55.001 "hosts": [] 00:09:55.001 }, 00:09:55.001 { 00:09:55.001 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:55.001 "subtype": "NVMe", 00:09:55.001 "listen_addresses": [ 00:09:55.001 { 00:09:55.001 "trtype": "VFIOUSER", 00:09:55.001 "adrfam": "IPv4", 00:09:55.001 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:55.001 "trsvcid": "0" 00:09:55.001 } 00:09:55.001 ], 00:09:55.001 "allow_any_host": true, 00:09:55.001 "hosts": [], 00:09:55.001 "serial_number": "SPDK1", 00:09:55.001 "model_number": "SPDK bdev Controller", 00:09:55.001 "max_namespaces": 32, 00:09:55.001 "min_cntlid": 1, 00:09:55.001 "max_cntlid": 65519, 00:09:55.001 "namespaces": [ 00:09:55.001 { 00:09:55.001 "nsid": 1, 00:09:55.001 "bdev_name": "Malloc1", 00:09:55.001 "name": "Malloc1", 00:09:55.001 "nguid": "B097F370A36947969180860786327B30", 00:09:55.001 "uuid": "b097f370-a369-4796-9180-860786327b30" 00:09:55.001 } 00:09:55.001 ] 00:09:55.001 }, 00:09:55.001 { 00:09:55.001 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:55.001 "subtype": "NVMe", 00:09:55.001 "listen_addresses": [ 00:09:55.001 { 00:09:55.001 "trtype": "VFIOUSER", 00:09:55.001 "adrfam": "IPv4", 00:09:55.001 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:55.001 "trsvcid": "0" 00:09:55.001 } 00:09:55.001 ], 00:09:55.001 "allow_any_host": true, 00:09:55.001 "hosts": [], 00:09:55.001 "serial_number": "SPDK2", 00:09:55.001 "model_number": "SPDK bdev Controller", 00:09:55.001 "max_namespaces": 32, 00:09:55.001 "min_cntlid": 1, 00:09:55.001 "max_cntlid": 65519, 00:09:55.001 "namespaces": [ 00:09:55.001 { 00:09:55.001 "nsid": 1, 00:09:55.001 "bdev_name": "Malloc2", 00:09:55.001 "name": "Malloc2", 00:09:55.001 "nguid": "F9F8F21740CF467795CC0FB6095333FB", 00:09:55.001 "uuid": "f9f8f217-40cf-4677-95cc-0fb6095333fb" 00:09:55.001 } 00:09:55.001 ] 00:09:55.001 } 00:09:55.001 ] 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2229750 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:55.001 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:55.001 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.001 [2024-07-16 00:13:46.529408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:55.260 Malloc3 00:09:55.260 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:55.518 [2024-07-16 00:13:46.897262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:55.518 00:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:55.518 Asynchronous Event Request test 00:09:55.518 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:55.518 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:55.518 Registering asynchronous event callbacks... 00:09:55.518 Starting namespace attribute notice tests for all controllers... 00:09:55.518 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:55.518 aer_cb - Changed Namespace 00:09:55.518 Cleaning up... 00:09:55.779 [ 00:09:55.779 { 00:09:55.779 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:55.779 "subtype": "Discovery", 00:09:55.779 "listen_addresses": [], 00:09:55.779 "allow_any_host": true, 00:09:55.779 "hosts": [] 00:09:55.779 }, 00:09:55.779 { 00:09:55.779 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:55.779 "subtype": "NVMe", 00:09:55.779 "listen_addresses": [ 00:09:55.779 { 00:09:55.779 "trtype": "VFIOUSER", 00:09:55.779 "adrfam": "IPv4", 00:09:55.779 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:55.779 "trsvcid": "0" 00:09:55.779 } 00:09:55.779 ], 00:09:55.779 "allow_any_host": true, 00:09:55.779 "hosts": [], 00:09:55.779 "serial_number": "SPDK1", 00:09:55.779 "model_number": "SPDK bdev Controller", 00:09:55.779 "max_namespaces": 32, 00:09:55.779 "min_cntlid": 1, 00:09:55.779 "max_cntlid": 65519, 00:09:55.779 "namespaces": [ 00:09:55.779 { 00:09:55.779 "nsid": 1, 00:09:55.779 "bdev_name": "Malloc1", 00:09:55.779 "name": "Malloc1", 00:09:55.779 "nguid": "B097F370A36947969180860786327B30", 00:09:55.779 "uuid": "b097f370-a369-4796-9180-860786327b30" 00:09:55.779 }, 00:09:55.779 { 00:09:55.779 "nsid": 2, 00:09:55.779 "bdev_name": "Malloc3", 00:09:55.779 "name": "Malloc3", 00:09:55.779 "nguid": "CD87A798FB9645FB8797680657FDA17B", 00:09:55.779 "uuid": "cd87a798-fb96-45fb-8797-680657fda17b" 00:09:55.779 } 00:09:55.779 ] 00:09:55.779 }, 00:09:55.779 { 00:09:55.779 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:55.779 "subtype": "NVMe", 00:09:55.779 "listen_addresses": [ 00:09:55.779 { 00:09:55.779 "trtype": "VFIOUSER", 00:09:55.779 "adrfam": "IPv4", 00:09:55.779 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:55.779 "trsvcid": "0" 00:09:55.779 } 00:09:55.779 ], 00:09:55.779 "allow_any_host": true, 00:09:55.779 "hosts": [], 00:09:55.779 "serial_number": "SPDK2", 00:09:55.779 "model_number": "SPDK bdev Controller", 00:09:55.779 
"max_namespaces": 32, 00:09:55.779 "min_cntlid": 1, 00:09:55.779 "max_cntlid": 65519, 00:09:55.779 "namespaces": [ 00:09:55.779 { 00:09:55.779 "nsid": 1, 00:09:55.779 "bdev_name": "Malloc2", 00:09:55.779 "name": "Malloc2", 00:09:55.779 "nguid": "F9F8F21740CF467795CC0FB6095333FB", 00:09:55.779 "uuid": "f9f8f217-40cf-4677-95cc-0fb6095333fb" 00:09:55.779 } 00:09:55.779 ] 00:09:55.779 } 00:09:55.779 ] 00:09:55.779 00:13:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2229750 00:09:55.779 00:13:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:55.779 00:13:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:55.779 00:13:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:55.779 00:13:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:55.779 [2024-07-16 00:13:47.174114] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:09:55.779 [2024-07-16 00:13:47.174160] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229879 ] 00:09:55.779 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.779 [2024-07-16 00:13:47.208084] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:55.779 [2024-07-16 00:13:47.216200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:55.779 [2024-07-16 00:13:47.216228] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f56475e3000 00:09:55.779 [2024-07-16 00:13:47.217206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.779 [2024-07-16 00:13:47.218211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.779 [2024-07-16 00:13:47.219202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.780 [2024-07-16 00:13:47.220215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.780 [2024-07-16 00:13:47.221221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.780 [2024-07-16 00:13:47.222227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.780 [2024-07-16 00:13:47.223234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.780 [2024-07-16 00:13:47.224257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.780 [2024-07-16 00:13:47.225251] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:55.780 [2024-07-16 00:13:47.225274] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f56475d8000 00:09:55.780 [2024-07-16 00:13:47.226386] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:55.780 [2024-07-16 00:13:47.242530] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:55.780 [2024-07-16 00:13:47.242566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:55.780 [2024-07-16 00:13:47.247668] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:55.780 [2024-07-16 00:13:47.247722] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:55.780 [2024-07-16 00:13:47.247813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:09:55.780 [2024-07-16 00:13:47.247838] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:55.780 [2024-07-16 00:13:47.247849] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:55.780 [2024-07-16 00:13:47.248677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:55.780 [2024-07-16 00:13:47.248698] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:55.780 [2024-07-16 00:13:47.248710] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:55.780 [2024-07-16 00:13:47.249681] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:55.780 [2024-07-16 00:13:47.249701] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:55.780 [2024-07-16 00:13:47.249715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:55.780 [2024-07-16 00:13:47.250685] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:55.780 [2024-07-16 00:13:47.250706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:55.780 [2024-07-16 00:13:47.251696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:55.780 [2024-07-16 00:13:47.251717] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:55.780 [2024-07-16 00:13:47.251731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:55.780 [2024-07-16 00:13:47.251743] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:55.780 [2024-07-16 00:13:47.251852] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:55.780 [2024-07-16 00:13:47.251884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:55.780 [2024-07-16 00:13:47.251894] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:55.780 [2024-07-16 00:13:47.252704] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:55.780 [2024-07-16 00:13:47.253715] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:55.780 [2024-07-16 00:13:47.254718] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:55.780 [2024-07-16 00:13:47.255716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:55.780 [2024-07-16 00:13:47.255779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:55.780 [2024-07-16 00:13:47.256734] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:55.780 [2024-07-16 00:13:47.256755] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:55.780 [2024-07-16 00:13:47.256764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.256787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:55.780 [2024-07-16 00:13:47.256804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.256825] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:55.780 [2024-07-16 00:13:47.256835] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:55.780 [2024-07-16 00:13:47.256853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:55.780 [2024-07-16 00:13:47.260894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:55.780 [2024-07-16 00:13:47.260918] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:55.780 [2024-07-16 00:13:47.260927] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:55.780 [2024-07-16 00:13:47.260935] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:55.780 [2024-07-16 00:13:47.260943] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:55.780 [2024-07-16 00:13:47.260951] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:55.780 [2024-07-16 00:13:47.260959] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:55.780 [2024-07-16 00:13:47.260967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.260985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.261006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:55.780 [2024-07-16 00:13:47.268887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:55.780 [2024-07-16 00:13:47.268911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:55.780 [2024-07-16 00:13:47.268940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:55.780 [2024-07-16 00:13:47.268952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:55.780 [2024-07-16 00:13:47.268964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:55.780 [2024-07-16 00:13:47.268973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.268990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.269006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:55.780 [2024-07-16 00:13:47.276899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:55.780 [2024-07-16 00:13:47.276927] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:55.780 [2024-07-16 00:13:47.276936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.276952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.276963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.276977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:55.780 [2024-07-16 00:13:47.284888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:55.780 [2024-07-16 00:13:47.284973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.284991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.285004] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:55.780 [2024-07-16 00:13:47.285012] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:55.780 [2024-07-16 00:13:47.285022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:55.780 [2024-07-16 00:13:47.292886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:55.780 [2024-07-16 00:13:47.292910] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:55.780 [2024-07-16 00:13:47.292933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.292949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.292962] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:55.780 [2024-07-16 00:13:47.292970] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:55.780 [2024-07-16 00:13:47.292980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:55.780 [2024-07-16 00:13:47.300886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:55.780 [2024-07-16 00:13:47.300913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.300931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:55.780 [2024-07-16 00:13:47.300945] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:55.780 [2024-07-16 00:13:47.300953] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:55.781 [2024-07-16 00:13:47.300962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:55.781 [2024-07-16 00:13:47.311890] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:55.781 [2024-07-16 00:13:47.311912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:55.781 [2024-07-16 00:13:47.311933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:55.781 [2024-07-16 00:13:47.311948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:55.781 [2024-07-16 00:13:47.311959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:09:55.781 [2024-07-16 00:13:47.311968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:55.781 [2024-07-16 00:13:47.311976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:55.781 [2024-07-16 00:13:47.311984] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:55.781 [2024-07-16 00:13:47.311992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:55.781 [2024-07-16 00:13:47.312000] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:55.781 [2024-07-16 00:13:47.312026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:55.781 [2024-07-16 00:13:47.319886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:55.781 [2024-07-16 00:13:47.319913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:55.781 [2024-07-16 00:13:47.327889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:55.781 [2024-07-16 00:13:47.327917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:55.781 [2024-07-16 00:13:47.335889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:55.781 [2024-07-16 00:13:47.335914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:55.781 [2024-07-16 00:13:47.343889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:55.781 [2024-07-16 00:13:47.343920] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:55.781 [2024-07-16 00:13:47.343931] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:55.781 [2024-07-16 00:13:47.343938] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
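The register and PRP traces around this point come from the spdk_nvme_identify step started at nvmf_vfio_user.sh@83; the same step can be rerun by hand with the exact arguments captured in this log (-g again selects single-file-segments DPDK memory, and each -L enables one of the debug components interleaved here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -L nvme -L nvme_vfio -L vfio_pci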
00:09:55.781 [2024-07-16 00:13:47.343944] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:55.781 [2024-07-16 00:13:47.343953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:55.781 [2024-07-16 00:13:47.343965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:55.781 [2024-07-16 00:13:47.343974] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:55.781 [2024-07-16 00:13:47.343983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:55.781 [2024-07-16 00:13:47.343994] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:55.781 [2024-07-16 00:13:47.344002] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:55.781 [2024-07-16 00:13:47.344010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:55.781 [2024-07-16 00:13:47.344023] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:55.781 [2024-07-16 00:13:47.344030] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:55.781 [2024-07-16 00:13:47.344039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:55.781 [2024-07-16 00:13:47.351889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:55.781 [2024-07-16 00:13:47.351916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:55.781 [2024-07-16 00:13:47.351934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:55.781 [2024-07-16 00:13:47.351946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:55.781 ===================================================== 00:09:55.781 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:55.781 ===================================================== 00:09:55.781 Controller Capabilities/Features 00:09:55.781 ================================ 00:09:55.781 Vendor ID: 4e58 00:09:55.781 Subsystem Vendor ID: 4e58 00:09:55.781 Serial Number: SPDK2 00:09:55.781 Model Number: SPDK bdev Controller 00:09:55.781 Firmware Version: 24.09 00:09:55.781 Recommended Arb Burst: 6 00:09:55.781 IEEE OUI Identifier: 8d 6b 50 00:09:55.781 Multi-path I/O 00:09:55.781 May have multiple subsystem ports: Yes 00:09:55.781 May have multiple controllers: Yes 00:09:55.781 Associated with SR-IOV VF: No 00:09:55.781 Max Data Transfer Size: 131072 00:09:55.781 Max Number of Namespaces: 32 00:09:55.781 Max Number of I/O Queues: 127 00:09:55.781 NVMe Specification Version (VS): 1.3 00:09:55.781 NVMe Specification Version (Identify): 1.3 00:09:55.781 Maximum Queue Entries: 256 00:09:55.781 Contiguous Queues Required: Yes 00:09:55.781 Arbitration Mechanisms 
Supported 00:09:55.781 Weighted Round Robin: Not Supported 00:09:55.781 Vendor Specific: Not Supported 00:09:55.781 Reset Timeout: 15000 ms 00:09:55.781 Doorbell Stride: 4 bytes 00:09:55.781 NVM Subsystem Reset: Not Supported 00:09:55.781 Command Sets Supported 00:09:55.781 NVM Command Set: Supported 00:09:55.781 Boot Partition: Not Supported 00:09:55.781 Memory Page Size Minimum: 4096 bytes 00:09:55.781 Memory Page Size Maximum: 4096 bytes 00:09:55.781 Persistent Memory Region: Not Supported 00:09:55.781 Optional Asynchronous Events Supported 00:09:55.781 Namespace Attribute Notices: Supported 00:09:55.781 Firmware Activation Notices: Not Supported 00:09:55.781 ANA Change Notices: Not Supported 00:09:55.781 PLE Aggregate Log Change Notices: Not Supported 00:09:55.781 LBA Status Info Alert Notices: Not Supported 00:09:55.781 EGE Aggregate Log Change Notices: Not Supported 00:09:55.781 Normal NVM Subsystem Shutdown event: Not Supported 00:09:55.781 Zone Descriptor Change Notices: Not Supported 00:09:55.781 Discovery Log Change Notices: Not Supported 00:09:55.781 Controller Attributes 00:09:55.781 128-bit Host Identifier: Supported 00:09:55.781 Non-Operational Permissive Mode: Not Supported 00:09:55.781 NVM Sets: Not Supported 00:09:55.781 Read Recovery Levels: Not Supported 00:09:55.781 Endurance Groups: Not Supported 00:09:55.781 Predictable Latency Mode: Not Supported 00:09:55.781 Traffic Based Keep ALive: Not Supported 00:09:55.781 Namespace Granularity: Not Supported 00:09:55.781 SQ Associations: Not Supported 00:09:55.781 UUID List: Not Supported 00:09:55.781 Multi-Domain Subsystem: Not Supported 00:09:55.781 Fixed Capacity Management: Not Supported 00:09:55.781 Variable Capacity Management: Not Supported 00:09:55.781 Delete Endurance Group: Not Supported 00:09:55.781 Delete NVM Set: Not Supported 00:09:55.781 Extended LBA Formats Supported: Not Supported 00:09:55.781 Flexible Data Placement Supported: Not Supported 00:09:55.781 00:09:55.781 Controller Memory Buffer Support 00:09:55.781 ================================ 00:09:55.781 Supported: No 00:09:55.781 00:09:55.781 Persistent Memory Region Support 00:09:55.781 ================================ 00:09:55.781 Supported: No 00:09:55.781 00:09:55.781 Admin Command Set Attributes 00:09:55.781 ============================ 00:09:55.781 Security Send/Receive: Not Supported 00:09:55.781 Format NVM: Not Supported 00:09:55.781 Firmware Activate/Download: Not Supported 00:09:55.781 Namespace Management: Not Supported 00:09:55.781 Device Self-Test: Not Supported 00:09:55.781 Directives: Not Supported 00:09:55.781 NVMe-MI: Not Supported 00:09:55.781 Virtualization Management: Not Supported 00:09:55.781 Doorbell Buffer Config: Not Supported 00:09:55.781 Get LBA Status Capability: Not Supported 00:09:55.781 Command & Feature Lockdown Capability: Not Supported 00:09:55.781 Abort Command Limit: 4 00:09:55.781 Async Event Request Limit: 4 00:09:55.781 Number of Firmware Slots: N/A 00:09:55.781 Firmware Slot 1 Read-Only: N/A 00:09:55.781 Firmware Activation Without Reset: N/A 00:09:55.781 Multiple Update Detection Support: N/A 00:09:55.781 Firmware Update Granularity: No Information Provided 00:09:55.781 Per-Namespace SMART Log: No 00:09:55.781 Asymmetric Namespace Access Log Page: Not Supported 00:09:55.781 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:55.781 Command Effects Log Page: Supported 00:09:55.781 Get Log Page Extended Data: Supported 00:09:55.781 Telemetry Log Pages: Not Supported 00:09:55.781 Persistent Event Log Pages: Not Supported 
00:09:55.781 Supported Log Pages Log Page: May Support 00:09:55.781 Commands Supported & Effects Log Page: Not Supported 00:09:55.781 Feature Identifiers & Effects Log Page:May Support 00:09:55.781 NVMe-MI Commands & Effects Log Page: May Support 00:09:55.781 Data Area 4 for Telemetry Log: Not Supported 00:09:55.781 Error Log Page Entries Supported: 128 00:09:55.781 Keep Alive: Supported 00:09:55.781 Keep Alive Granularity: 10000 ms 00:09:55.781 00:09:55.781 NVM Command Set Attributes 00:09:55.781 ========================== 00:09:55.781 Submission Queue Entry Size 00:09:55.781 Max: 64 00:09:55.781 Min: 64 00:09:55.781 Completion Queue Entry Size 00:09:55.781 Max: 16 00:09:55.781 Min: 16 00:09:55.781 Number of Namespaces: 32 00:09:55.781 Compare Command: Supported 00:09:55.781 Write Uncorrectable Command: Not Supported 00:09:55.781 Dataset Management Command: Supported 00:09:55.781 Write Zeroes Command: Supported 00:09:55.781 Set Features Save Field: Not Supported 00:09:55.782 Reservations: Not Supported 00:09:55.782 Timestamp: Not Supported 00:09:55.782 Copy: Supported 00:09:55.782 Volatile Write Cache: Present 00:09:55.782 Atomic Write Unit (Normal): 1 00:09:55.782 Atomic Write Unit (PFail): 1 00:09:55.782 Atomic Compare & Write Unit: 1 00:09:55.782 Fused Compare & Write: Supported 00:09:55.782 Scatter-Gather List 00:09:55.782 SGL Command Set: Supported (Dword aligned) 00:09:55.782 SGL Keyed: Not Supported 00:09:55.782 SGL Bit Bucket Descriptor: Not Supported 00:09:55.782 SGL Metadata Pointer: Not Supported 00:09:55.782 Oversized SGL: Not Supported 00:09:55.782 SGL Metadata Address: Not Supported 00:09:55.782 SGL Offset: Not Supported 00:09:55.782 Transport SGL Data Block: Not Supported 00:09:55.782 Replay Protected Memory Block: Not Supported 00:09:55.782 00:09:55.782 Firmware Slot Information 00:09:55.782 ========================= 00:09:55.782 Active slot: 1 00:09:55.782 Slot 1 Firmware Revision: 24.09 00:09:55.782 00:09:55.782 00:09:55.782 Commands Supported and Effects 00:09:55.782 ============================== 00:09:55.782 Admin Commands 00:09:55.782 -------------- 00:09:55.782 Get Log Page (02h): Supported 00:09:55.782 Identify (06h): Supported 00:09:55.782 Abort (08h): Supported 00:09:55.782 Set Features (09h): Supported 00:09:55.782 Get Features (0Ah): Supported 00:09:55.782 Asynchronous Event Request (0Ch): Supported 00:09:55.782 Keep Alive (18h): Supported 00:09:55.782 I/O Commands 00:09:55.782 ------------ 00:09:55.782 Flush (00h): Supported LBA-Change 00:09:55.782 Write (01h): Supported LBA-Change 00:09:55.782 Read (02h): Supported 00:09:55.782 Compare (05h): Supported 00:09:55.782 Write Zeroes (08h): Supported LBA-Change 00:09:55.782 Dataset Management (09h): Supported LBA-Change 00:09:55.782 Copy (19h): Supported LBA-Change 00:09:55.782 00:09:55.782 Error Log 00:09:55.782 ========= 00:09:55.782 00:09:55.782 Arbitration 00:09:55.782 =========== 00:09:55.782 Arbitration Burst: 1 00:09:55.782 00:09:55.782 Power Management 00:09:55.782 ================ 00:09:55.782 Number of Power States: 1 00:09:55.782 Current Power State: Power State #0 00:09:55.782 Power State #0: 00:09:55.782 Max Power: 0.00 W 00:09:55.782 Non-Operational State: Operational 00:09:55.782 Entry Latency: Not Reported 00:09:55.782 Exit Latency: Not Reported 00:09:55.782 Relative Read Throughput: 0 00:09:55.782 Relative Read Latency: 0 00:09:55.782 Relative Write Throughput: 0 00:09:55.782 Relative Write Latency: 0 00:09:55.782 Idle Power: Not Reported 00:09:55.782 Active Power: Not Reported 00:09:55.782 
Non-Operational Permissive Mode: Not Supported 00:09:55.782 00:09:55.782 Health Information 00:09:55.782 ================== 00:09:55.782 Critical Warnings: 00:09:55.782 Available Spare Space: OK 00:09:55.782 Temperature: OK 00:09:55.782 Device Reliability: OK 00:09:55.782 Read Only: No 00:09:55.782 Volatile Memory Backup: OK 00:09:55.782 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:55.782 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:55.782 Available Spare: 0% 00:09:55.782 [2024-07-16 00:13:47.352067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:55.782 [2024-07-16 00:13:47.359886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:55.782 [2024-07-16 00:13:47.359938] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:55.782 [2024-07-16 00:13:47.359957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.782 [2024-07-16 00:13:47.359969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.782 [2024-07-16 00:13:47.359979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.782 [2024-07-16 00:13:47.359993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.782 [2024-07-16 00:13:47.360075] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:55.782 [2024-07-16 00:13:47.360098] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:55.782 [2024-07-16 00:13:47.361074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:55.782 [2024-07-16 00:13:47.361146] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:55.782 [2024-07-16 00:13:47.361175] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:55.782 [2024-07-16 00:13:47.362086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:55.782 [2024-07-16 00:13:47.362111] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:55.782 [2024-07-16 00:13:47.362179] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:55.782 [2024-07-16 00:13:47.363353] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:56.041 Available Spare Threshold: 0% 00:09:56.041 Life Percentage Used: 0% 00:09:56.041 Data Units Read: 0 00:09:56.041 Data Units Written: 0 00:09:56.041 Host Read Commands: 0 00:09:56.041 Host Write Commands: 0 00:09:56.041 Controller Busy Time: 0 minutes 00:09:56.041 Power Cycles: 0 00:09:56.041 Power On Hours: 0 hours 00:09:56.041 Unsafe Shutdowns: 0 00:09:56.041 Unrecoverable Media
Errors: 0 00:09:56.041 Lifetime Error Log Entries: 0 00:09:56.041 Warning Temperature Time: 0 minutes 00:09:56.041 Critical Temperature Time: 0 minutes 00:09:56.041 00:09:56.041 Number of Queues 00:09:56.041 ================ 00:09:56.041 Number of I/O Submission Queues: 127 00:09:56.041 Number of I/O Completion Queues: 127 00:09:56.041 00:09:56.041 Active Namespaces 00:09:56.041 ================= 00:09:56.041 Namespace ID:1 00:09:56.041 Error Recovery Timeout: Unlimited 00:09:56.041 Command Set Identifier: NVM (00h) 00:09:56.041 Deallocate: Supported 00:09:56.042 Deallocated/Unwritten Error: Not Supported 00:09:56.042 Deallocated Read Value: Unknown 00:09:56.042 Deallocate in Write Zeroes: Not Supported 00:09:56.042 Deallocated Guard Field: 0xFFFF 00:09:56.042 Flush: Supported 00:09:56.042 Reservation: Supported 00:09:56.042 Namespace Sharing Capabilities: Multiple Controllers 00:09:56.042 Size (in LBAs): 131072 (0GiB) 00:09:56.042 Capacity (in LBAs): 131072 (0GiB) 00:09:56.042 Utilization (in LBAs): 131072 (0GiB) 00:09:56.042 NGUID: F9F8F21740CF467795CC0FB6095333FB 00:09:56.042 UUID: f9f8f217-40cf-4677-95cc-0fb6095333fb 00:09:56.042 Thin Provisioning: Not Supported 00:09:56.042 Per-NS Atomic Units: Yes 00:09:56.042 Atomic Boundary Size (Normal): 0 00:09:56.042 Atomic Boundary Size (PFail): 0 00:09:56.042 Atomic Boundary Offset: 0 00:09:56.042 Maximum Single Source Range Length: 65535 00:09:56.042 Maximum Copy Length: 65535 00:09:56.042 Maximum Source Range Count: 1 00:09:56.042 NGUID/EUI64 Never Reused: No 00:09:56.042 Namespace Write Protected: No 00:09:56.042 Number of LBA Formats: 1 00:09:56.042 Current LBA Format: LBA Format #00 00:09:56.042 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:56.042 00:09:56.042 00:13:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:56.042 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.042 [2024-07-16 00:13:47.587677] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:01.316 Initializing NVMe Controllers 00:10:01.316 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:01.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:01.316 Initialization complete. Launching workers. 
00:10:01.316 ======================================================== 00:10:01.316 Latency(us) 00:10:01.316 Device Information : IOPS MiB/s Average min max 00:10:01.316 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33835.73 132.17 3783.64 1189.99 9853.03 00:10:01.316 ======================================================== 00:10:01.316 Total : 33835.73 132.17 3783.64 1189.99 9853.03 00:10:01.316 00:10:01.316 [2024-07-16 00:13:52.694249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:01.316 00:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:01.316 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.573 [2024-07-16 00:13:52.935882] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:06.910 Initializing NVMe Controllers 00:10:06.910 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:06.910 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:06.910 Initialization complete. Launching workers. 00:10:06.910 ======================================================== 00:10:06.910 Latency(us) 00:10:06.910 Device Information : IOPS MiB/s Average min max 00:10:06.910 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32124.08 125.48 3983.73 1219.73 7546.58 00:10:06.910 ======================================================== 00:10:06.910 Total : 32124.08 125.48 3983.73 1219.73 7546.58 00:10:06.910 00:10:06.910 [2024-07-16 00:13:57.958089] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:06.910 00:13:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:06.911 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.911 [2024-07-16 00:13:58.170000] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:12.177 [2024-07-16 00:14:03.311036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:12.177 Initializing NVMe Controllers 00:10:12.177 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:12.177 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:12.177 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:12.177 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:12.177 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:12.177 Initialization complete. Launching workers. 
00:10:12.177 Starting thread on core 2 00:10:12.177 Starting thread on core 3 00:10:12.177 Starting thread on core 1 00:10:12.177 00:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:12.177 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.177 [2024-07-16 00:14:03.619400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.467 [2024-07-16 00:14:06.688351] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.467 Initializing NVMe Controllers 00:10:15.467 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.467 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.467 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:15.467 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:15.467 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:15.467 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:15.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:15.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:15.467 Initialization complete. Launching workers. 00:10:15.467 Starting thread on core 1 with urgent priority queue 00:10:15.467 Starting thread on core 2 with urgent priority queue 00:10:15.467 Starting thread on core 3 with urgent priority queue 00:10:15.467 Starting thread on core 0 with urgent priority queue 00:10:15.467 SPDK bdev Controller (SPDK2 ) core 0: 5438.00 IO/s 18.39 secs/100000 ios 00:10:15.467 SPDK bdev Controller (SPDK2 ) core 1: 5699.00 IO/s 17.55 secs/100000 ios 00:10:15.467 SPDK bdev Controller (SPDK2 ) core 2: 6318.00 IO/s 15.83 secs/100000 ios 00:10:15.467 SPDK bdev Controller (SPDK2 ) core 3: 5211.00 IO/s 19.19 secs/100000 ios 00:10:15.467 ======================================================== 00:10:15.467 00:10:15.467 00:14:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:15.467 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.467 [2024-07-16 00:14:06.989378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.467 Initializing NVMe Controllers 00:10:15.467 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.467 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.467 Namespace ID: 1 size: 0GB 00:10:15.467 Initialization complete. 00:10:15.467 INFO: using host memory buffer for IO 00:10:15.467 Hello world! 
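Each of the tools exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the same target through one vfio-user transport ID string, so the invocations differ only in binary and flags. A condensed sketch of that shape, with the arguments exactly as captured in this run (the TRID and SPDK shell variables are added here for brevity and are not part of the log):

  # transport ID naming the vfio-user socket directory and the subsystem NQN
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -q queue depth, -o I/O size (bytes), -w workload, -t run time (s), -c core mask
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  $SPDK/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  $SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
  $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"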
00:10:15.467 [2024-07-16 00:14:07.001482] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.467 00:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:15.727 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.727 [2024-07-16 00:14:07.293243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:17.105 Initializing NVMe Controllers 00:10:17.105 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.105 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.105 Initialization complete. Launching workers. 00:10:17.105 submit (in ns) avg, min, max = 8571.2, 3487.8, 4029337.8 00:10:17.105 complete (in ns) avg, min, max = 26506.8, 2043.3, 4998542.2 00:10:17.105 00:10:17.105 Submit histogram 00:10:17.105 ================ 00:10:17.105 Range in us Cumulative Count 00:10:17.105 3.484 - 3.508: 0.3246% ( 42) 00:10:17.105 3.508 - 3.532: 1.3140% ( 128) 00:10:17.105 3.532 - 3.556: 3.2926% ( 256) 00:10:17.105 3.556 - 3.579: 8.1543% ( 629) 00:10:17.105 3.579 - 3.603: 16.0071% ( 1016) 00:10:17.105 3.603 - 3.627: 24.0609% ( 1042) 00:10:17.105 3.627 - 3.650: 32.5243% ( 1095) 00:10:17.105 3.650 - 3.674: 39.7125% ( 930) 00:10:17.105 3.674 - 3.698: 47.1711% ( 965) 00:10:17.105 3.698 - 3.721: 53.4781% ( 816) 00:10:17.105 3.721 - 3.745: 58.0847% ( 596) 00:10:17.105 3.745 - 3.769: 61.4392% ( 434) 00:10:17.105 3.769 - 3.793: 64.4535% ( 390) 00:10:17.105 3.793 - 3.816: 68.0244% ( 462) 00:10:17.105 3.816 - 3.840: 71.8890% ( 500) 00:10:17.105 3.840 - 3.864: 76.1710% ( 554) 00:10:17.105 3.864 - 3.887: 79.8578% ( 477) 00:10:17.105 3.887 - 3.911: 83.1736% ( 429) 00:10:17.105 3.911 - 3.935: 86.0025% ( 366) 00:10:17.105 3.935 - 3.959: 87.9966% ( 258) 00:10:17.105 3.959 - 3.982: 89.4420% ( 187) 00:10:17.105 3.982 - 4.006: 90.6168% ( 152) 00:10:17.105 4.006 - 4.030: 91.6370% ( 132) 00:10:17.105 4.030 - 4.053: 92.3868% ( 97) 00:10:17.105 4.053 - 4.077: 93.1056% ( 93) 00:10:17.105 4.077 - 4.101: 93.8321% ( 94) 00:10:17.105 4.101 - 4.124: 94.4350% ( 78) 00:10:17.105 4.124 - 4.148: 94.9915% ( 72) 00:10:17.105 4.148 - 4.172: 95.4166% ( 55) 00:10:17.105 4.172 - 4.196: 95.6949% ( 36) 00:10:17.105 4.196 - 4.219: 95.9267% ( 30) 00:10:17.105 4.219 - 4.243: 96.1509% ( 29) 00:10:17.105 4.243 - 4.267: 96.3518% ( 26) 00:10:17.105 4.267 - 4.290: 96.4910% ( 18) 00:10:17.105 4.290 - 4.314: 96.6146% ( 16) 00:10:17.105 4.314 - 4.338: 96.7151% ( 13) 00:10:17.105 4.338 - 4.361: 96.8851% ( 22) 00:10:17.105 4.361 - 4.385: 96.9856% ( 13) 00:10:17.105 4.385 - 4.409: 97.1170% ( 17) 00:10:17.105 4.409 - 4.433: 97.1943% ( 10) 00:10:17.105 4.433 - 4.456: 97.2175% ( 3) 00:10:17.105 4.456 - 4.480: 97.2871% ( 9) 00:10:17.105 4.480 - 4.504: 97.3566% ( 9) 00:10:17.105 4.504 - 4.527: 97.3644% ( 1) 00:10:17.105 4.527 - 4.551: 97.3953% ( 4) 00:10:17.105 4.551 - 4.575: 97.4107% ( 2) 00:10:17.105 4.599 - 4.622: 97.4339% ( 3) 00:10:17.105 4.646 - 4.670: 97.4571% ( 3) 00:10:17.105 4.693 - 4.717: 97.4880% ( 4) 00:10:17.105 4.717 - 4.741: 97.5344% ( 6) 00:10:17.105 4.741 - 4.764: 97.6040% ( 9) 00:10:17.105 4.764 - 4.788: 97.6503% ( 6) 00:10:17.105 4.788 - 4.812: 97.7044% ( 7) 00:10:17.105 4.812 - 4.836: 97.7199% ( 2) 00:10:17.105 4.836 - 4.859: 97.7740% ( 7) 00:10:17.105 4.859 - 4.883: 97.8436% ( 9) 00:10:17.105 
4.883 - 4.907: 97.8745% ( 4) 00:10:17.105 4.907 - 4.930: 97.9440% ( 9) 00:10:17.105 4.930 - 4.954: 97.9595% ( 2) 00:10:17.105 4.954 - 4.978: 97.9981% ( 5) 00:10:17.105 4.978 - 5.001: 98.0677% ( 9) 00:10:17.105 5.001 - 5.025: 98.0909% ( 3) 00:10:17.105 5.025 - 5.049: 98.1295% ( 5) 00:10:17.105 5.049 - 5.073: 98.1605% ( 4) 00:10:17.105 5.073 - 5.096: 98.1914% ( 4) 00:10:17.105 5.096 - 5.120: 98.2068% ( 2) 00:10:17.105 5.144 - 5.167: 98.2223% ( 2) 00:10:17.106 5.167 - 5.191: 98.2455% ( 3) 00:10:17.106 5.191 - 5.215: 98.2532% ( 1) 00:10:17.106 5.286 - 5.310: 98.2609% ( 1) 00:10:17.106 5.357 - 5.381: 98.2687% ( 1) 00:10:17.106 5.381 - 5.404: 98.2764% ( 1) 00:10:17.106 5.428 - 5.452: 98.2919% ( 2) 00:10:17.106 5.452 - 5.476: 98.3073% ( 2) 00:10:17.106 5.523 - 5.547: 98.3150% ( 1) 00:10:17.106 5.618 - 5.641: 98.3228% ( 1) 00:10:17.106 5.760 - 5.784: 98.3305% ( 1) 00:10:17.106 5.879 - 5.902: 98.3382% ( 1) 00:10:17.106 6.116 - 6.163: 98.3537% ( 2) 00:10:17.106 6.447 - 6.495: 98.3691% ( 2) 00:10:17.106 6.590 - 6.637: 98.3769% ( 1) 00:10:17.106 6.732 - 6.779: 98.3923% ( 2) 00:10:17.106 6.779 - 6.827: 98.4078% ( 2) 00:10:17.106 6.827 - 6.874: 98.4155% ( 1) 00:10:17.106 6.874 - 6.921: 98.4310% ( 2) 00:10:17.106 6.921 - 6.969: 98.4464% ( 2) 00:10:17.106 6.969 - 7.016: 98.4696% ( 3) 00:10:17.106 7.016 - 7.064: 98.4774% ( 1) 00:10:17.106 7.111 - 7.159: 98.4928% ( 2) 00:10:17.106 7.159 - 7.206: 98.5160% ( 3) 00:10:17.106 7.206 - 7.253: 98.5237% ( 1) 00:10:17.106 7.348 - 7.396: 98.5469% ( 3) 00:10:17.106 7.443 - 7.490: 98.5624% ( 2) 00:10:17.106 7.538 - 7.585: 98.5701% ( 1) 00:10:17.106 7.633 - 7.680: 98.5778% ( 1) 00:10:17.106 7.727 - 7.775: 98.5856% ( 1) 00:10:17.106 7.822 - 7.870: 98.6165% ( 4) 00:10:17.106 7.870 - 7.917: 98.6242% ( 1) 00:10:17.106 7.917 - 7.964: 98.6397% ( 2) 00:10:17.106 7.964 - 8.012: 98.6474% ( 1) 00:10:17.106 8.107 - 8.154: 98.6629% ( 2) 00:10:17.106 8.154 - 8.201: 98.6706% ( 1) 00:10:17.106 8.201 - 8.249: 98.6783% ( 1) 00:10:17.106 8.249 - 8.296: 98.6938% ( 2) 00:10:17.106 8.344 - 8.391: 98.7015% ( 1) 00:10:17.106 8.391 - 8.439: 98.7170% ( 2) 00:10:17.106 8.439 - 8.486: 98.7247% ( 1) 00:10:17.106 8.486 - 8.533: 98.7324% ( 1) 00:10:17.106 8.533 - 8.581: 98.7401% ( 1) 00:10:17.106 8.628 - 8.676: 98.7479% ( 1) 00:10:17.106 9.055 - 9.102: 98.7556% ( 1) 00:10:17.106 9.292 - 9.339: 98.7633% ( 1) 00:10:17.106 9.576 - 9.624: 98.7711% ( 1) 00:10:17.106 9.719 - 9.766: 98.7788% ( 1) 00:10:17.106 9.861 - 9.908: 98.7942% ( 2) 00:10:17.106 10.098 - 10.145: 98.8020% ( 1) 00:10:17.106 10.145 - 10.193: 98.8097% ( 1) 00:10:17.106 10.193 - 10.240: 98.8174% ( 1) 00:10:17.106 10.240 - 10.287: 98.8252% ( 1) 00:10:17.106 10.524 - 10.572: 98.8329% ( 1) 00:10:17.106 10.856 - 10.904: 98.8406% ( 1) 00:10:17.106 10.999 - 11.046: 98.8484% ( 1) 00:10:17.106 11.046 - 11.093: 98.8561% ( 1) 00:10:17.106 11.330 - 11.378: 98.8638% ( 1) 00:10:17.106 11.567 - 11.615: 98.8715% ( 1) 00:10:17.106 11.757 - 11.804: 98.8793% ( 1) 00:10:17.106 12.136 - 12.231: 98.8870% ( 1) 00:10:17.106 12.421 - 12.516: 98.9025% ( 2) 00:10:17.106 12.516 - 12.610: 98.9102% ( 1) 00:10:17.106 12.705 - 12.800: 98.9256% ( 2) 00:10:17.106 12.895 - 12.990: 98.9334% ( 1) 00:10:17.106 13.084 - 13.179: 98.9411% ( 1) 00:10:17.106 13.653 - 13.748: 98.9566% ( 2) 00:10:17.106 13.748 - 13.843: 98.9643% ( 1) 00:10:17.106 14.317 - 14.412: 98.9720% ( 1) 00:10:17.106 14.412 - 14.507: 98.9797% ( 1) 00:10:17.106 14.981 - 15.076: 98.9875% ( 1) 00:10:17.106 15.170 - 15.265: 98.9952% ( 1) 00:10:17.106 15.455 - 15.550: 99.0029% ( 1) 00:10:17.106 17.067 - 17.161: 
99.0107% ( 1) 00:10:17.106 17.161 - 17.256: 99.0184% ( 1) 00:10:17.106 17.256 - 17.351: 99.0261% ( 1) 00:10:17.106 17.446 - 17.541: 99.0339% ( 1) 00:10:17.106 17.541 - 17.636: 99.0648% ( 4) 00:10:17.106 17.636 - 17.730: 99.0880% ( 3) 00:10:17.106 17.730 - 17.825: 99.1421% ( 7) 00:10:17.106 17.825 - 17.920: 99.1884% ( 6) 00:10:17.106 17.920 - 18.015: 99.2348% ( 6) 00:10:17.106 18.015 - 18.110: 99.3198% ( 11) 00:10:17.106 18.110 - 18.204: 99.3507% ( 4) 00:10:17.106 18.204 - 18.299: 99.3817% ( 4) 00:10:17.106 18.299 - 18.394: 99.4435% ( 8) 00:10:17.106 18.394 - 18.489: 99.4821% ( 5) 00:10:17.106 18.489 - 18.584: 99.5749% ( 12) 00:10:17.106 18.584 - 18.679: 99.6445% ( 9) 00:10:17.106 18.679 - 18.773: 99.6986% ( 7) 00:10:17.106 18.773 - 18.868: 99.7295% ( 4) 00:10:17.106 18.868 - 18.963: 99.7372% ( 1) 00:10:17.106 18.963 - 19.058: 99.7604% ( 3) 00:10:17.106 19.058 - 19.153: 99.7759% ( 2) 00:10:17.106 19.342 - 19.437: 99.7836% ( 1) 00:10:17.106 19.437 - 19.532: 99.7913% ( 1) 00:10:17.106 19.627 - 19.721: 99.8068% ( 2) 00:10:17.106 19.911 - 20.006: 99.8222% ( 2) 00:10:17.106 22.281 - 22.376: 99.8300% ( 1) 00:10:17.106 23.040 - 23.135: 99.8377% ( 1) 00:10:17.106 24.462 - 24.652: 99.8454% ( 1) 00:10:17.106 28.824 - 29.013: 99.8531% ( 1) 00:10:17.106 29.582 - 29.772: 99.8609% ( 1) 00:10:17.106 31.668 - 31.858: 99.8686% ( 1) 00:10:17.106 32.237 - 32.427: 99.8763% ( 1) 00:10:17.106 33.375 - 33.564: 99.8841% ( 1) 00:10:17.106 3980.705 - 4004.978: 99.9614% ( 10) 00:10:17.106 4004.978 - 4029.250: 99.9923% ( 4) 00:10:17.106 4029.250 - 4053.523: 100.0000% ( 1) 00:10:17.106 00:10:17.106 Complete histogram 00:10:17.106 ================== 00:10:17.106 Range in us Cumulative Count 00:10:17.106 2.039 - 2.050: 0.9352% ( 121) 00:10:17.106 2.050 - 2.062: 13.1009% ( 1574) 00:10:17.106 2.062 - 2.074: 17.3520% ( 550) 00:10:17.106 2.074 - 2.086: 30.5302% ( 1705) 00:10:17.106 2.086 - 2.098: 56.3225% ( 3337) 00:10:17.106 2.098 - 2.110: 60.8518% ( 586) 00:10:17.106 2.110 - 2.121: 64.1444% ( 426) 00:10:17.106 2.121 - 2.133: 67.7616% ( 468) 00:10:17.106 2.133 - 2.145: 68.7896% ( 133) 00:10:17.106 2.145 - 2.157: 75.1275% ( 820) 00:10:17.106 2.157 - 2.169: 81.0481% ( 766) 00:10:17.106 2.169 - 2.181: 82.1147% ( 138) 00:10:17.106 2.181 - 2.193: 83.2895% ( 152) 00:10:17.106 2.193 - 2.204: 84.8045% ( 196) 00:10:17.106 2.204 - 2.216: 85.7629% ( 124) 00:10:17.106 2.216 - 2.228: 89.0400% ( 424) 00:10:17.106 2.228 - 2.240: 91.7221% ( 347) 00:10:17.106 2.240 - 2.252: 93.2602% ( 199) 00:10:17.106 2.252 - 2.264: 93.9867% ( 94) 00:10:17.106 2.264 - 2.276: 94.2340% ( 32) 00:10:17.106 2.276 - 2.287: 94.6205% ( 50) 00:10:17.106 2.287 - 2.299: 94.9065% ( 37) 00:10:17.106 2.299 - 2.311: 95.0070% ( 13) 00:10:17.106 2.311 - 2.323: 95.3780% ( 48) 00:10:17.106 2.323 - 2.335: 95.5635% ( 24) 00:10:17.106 2.335 - 2.347: 95.7180% ( 20) 00:10:17.106 2.347 - 2.359: 95.9267% ( 27) 00:10:17.106 2.359 - 2.370: 96.2359% ( 40) 00:10:17.106 2.370 - 2.382: 96.5837% ( 45) 00:10:17.106 2.382 - 2.394: 96.9779% ( 51) 00:10:17.106 2.394 - 2.406: 97.2948% ( 41) 00:10:17.106 2.406 - 2.418: 97.5112% ( 28) 00:10:17.106 2.418 - 2.430: 97.6812% ( 22) 00:10:17.106 2.430 - 2.441: 97.7972% ( 15) 00:10:17.106 2.441 - 2.453: 97.9054% ( 14) 00:10:17.106 2.453 - 2.465: 97.9981% ( 12) 00:10:17.106 2.465 - 2.477: 98.0986% ( 13) 00:10:17.106 2.477 - 2.489: 98.1991% ( 13) 00:10:17.106 2.489 - 2.501: 98.2687% ( 9) 00:10:17.106 2.501 - 2.513: 98.3073% ( 5) 00:10:17.106 2.513 - 2.524: 98.3228% ( 2) 00:10:17.106 2.524 - 2.536: 98.3460% ( 3) 00:10:17.106 2.536 - 2.548: 98.3691% ( 3) 
00:10:17.106 2.548 - 2.560: 98.3846% ( 2) 00:10:17.106 2.560 - 2.572: 98.4078% ( 3) 00:10:17.106 2.596 - 2.607: 98.4155% ( 1) 00:10:17.106 2.619 - 2.631: 98.4232% ( 1) 00:10:17.106 2.631 - 2.643: 98.4310% ( 1) 00:10:17.106 2.643 - 2.655: 98.4464% ( 2) 00:10:17.106 2.655 - 2.667: 98.4696% ( 3) 00:10:17.106 2.714 - 2.726: 98.4851% ( 2) 00:10:17.106 2.750 - 2.761: 98.4928% ( 1) 00:10:17.106 3.224 - 3.247: 98.5005% ( 1) 00:10:17.106 3.247 - 3.271: 98.5083% ( 1) 00:10:17.106 3.342 - 3.366: 98.5160% ( 1) 00:10:17.106 3.366 - 3.390: 98.5237% ( 1) 00:10:17.106 3.413 - 3.437: 98.5315% ( 1) 00:10:17.106 3.484 - 3.508: 98.5392% ( 1) 00:10:17.106 3.556 - 3.579: 98.5546% ( 2) 00:10:17.106 3.579 - 3.603: 98.5701% ( 2) 00:10:17.106 3.627 - 3.650: 98.5778% ( 1) 00:10:17.106 3.674 - 3.698: 98.6087% ( 4) 00:10:17.106 3.721 - 3.745: 98.6242% ( 2) 00:10:17.106 3.745 - 3.769: 98.6397% ( 2) 00:10:17.106 3.816 - 3.840: 98.6474% ( 1) 00:10:17.106 3.840 - 3.864: 98.6551% ( 1) 00:10:17.106 3.935 - 3.959: 98.6629% ( 1) 00:10:17.106 3.959 - 3.982: 98.6706% ( 1) 00:10:17.106 4.006 - 4.030: 98.6783% ( 1) 00:10:17.106 4.053 - 4.077: 98.6860% ( 1) 00:10:17.106 4.077 - 4.101: 98.6938% ( 1) 00:10:17.106 4.670 - 4.693: 98.7015% ( 1) 00:10:17.106 4.907 - 4.930: 98.7092% ( 1) 00:10:17.106 5.144 - 5.167: 98.7170% ( 1) 00:10:17.106 5.191 - 5.215: 98.7247% ( 1) 00:10:17.106 5.333 - 5.357: 98.7324% ( 1) 00:10:17.106 5.428 - 5.452: 9[2024-07-16 00:14:08.398616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:17.106 8.7401% ( 1) 00:10:17.106 5.452 - 5.476: 98.7479% ( 1) 00:10:17.106 5.499 - 5.523: 98.7556% ( 1) 00:10:17.106 5.523 - 5.547: 98.7633% ( 1) 00:10:17.106 5.594 - 5.618: 98.7711% ( 1) 00:10:17.106 5.879 - 5.902: 98.7788% ( 1) 00:10:17.106 6.210 - 6.258: 98.7865% ( 1) 00:10:17.106 6.353 - 6.400: 98.7942% ( 1) 00:10:17.106 6.400 - 6.447: 98.8020% ( 1) 00:10:17.106 6.495 - 6.542: 98.8097% ( 1) 00:10:17.107 6.590 - 6.637: 98.8174% ( 1) 00:10:17.107 6.637 - 6.684: 98.8329% ( 2) 00:10:17.107 6.684 - 6.732: 98.8406% ( 1) 00:10:17.107 6.921 - 6.969: 98.8484% ( 1) 00:10:17.107 6.969 - 7.016: 98.8561% ( 1) 00:10:17.107 7.206 - 7.253: 98.8638% ( 1) 00:10:17.107 7.253 - 7.301: 98.8715% ( 1) 00:10:17.107 8.344 - 8.391: 98.8793% ( 1) 00:10:17.107 8.391 - 8.439: 98.8870% ( 1) 00:10:17.107 10.430 - 10.477: 98.8947% ( 1) 00:10:17.107 15.550 - 15.644: 98.9179% ( 3) 00:10:17.107 15.644 - 15.739: 98.9334% ( 2) 00:10:17.107 15.739 - 15.834: 98.9643% ( 4) 00:10:17.107 15.834 - 15.929: 98.9875% ( 3) 00:10:17.107 16.024 - 16.119: 99.0261% ( 5) 00:10:17.107 16.119 - 16.213: 99.0416% ( 2) 00:10:17.107 16.213 - 16.308: 99.0648% ( 3) 00:10:17.107 16.308 - 16.403: 99.1189% ( 7) 00:10:17.107 16.403 - 16.498: 99.1575% ( 5) 00:10:17.107 16.498 - 16.593: 99.1807% ( 3) 00:10:17.107 16.593 - 16.687: 99.2116% ( 4) 00:10:17.107 16.687 - 16.782: 99.2348% ( 3) 00:10:17.107 16.782 - 16.877: 99.2425% ( 1) 00:10:17.107 16.877 - 16.972: 99.2812% ( 5) 00:10:17.107 16.972 - 17.067: 99.3198% ( 5) 00:10:17.107 17.067 - 17.161: 99.3276% ( 1) 00:10:17.107 17.256 - 17.351: 99.3353% ( 1) 00:10:17.107 17.730 - 17.825: 99.3430% ( 1) 00:10:17.107 17.920 - 18.015: 99.3585% ( 2) 00:10:17.107 18.110 - 18.204: 99.3662% ( 1) 00:10:17.107 18.299 - 18.394: 99.3817% ( 2) 00:10:17.107 18.394 - 18.489: 99.3894% ( 1) 00:10:17.107 2111.716 - 2123.852: 99.3971% ( 1) 00:10:17.107 3228.255 - 3252.527: 99.4049% ( 1) 00:10:17.107 3980.705 - 4004.978: 99.8686% ( 60) 00:10:17.107 4004.978 - 4029.250: 99.9923% ( 16) 00:10:17.107 
4975.881 - 5000.154: 100.0000% ( 1) 00:10:17.107 00:10:17.107 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:17.107 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:17.107 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:17.107 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:17.107 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:17.365 [ 00:10:17.365 { 00:10:17.365 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:17.365 "subtype": "Discovery", 00:10:17.365 "listen_addresses": [], 00:10:17.365 "allow_any_host": true, 00:10:17.365 "hosts": [] 00:10:17.365 }, 00:10:17.365 { 00:10:17.365 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:17.365 "subtype": "NVMe", 00:10:17.365 "listen_addresses": [ 00:10:17.365 { 00:10:17.365 "trtype": "VFIOUSER", 00:10:17.365 "adrfam": "IPv4", 00:10:17.365 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:17.365 "trsvcid": "0" 00:10:17.365 } 00:10:17.365 ], 00:10:17.365 "allow_any_host": true, 00:10:17.365 "hosts": [], 00:10:17.365 "serial_number": "SPDK1", 00:10:17.365 "model_number": "SPDK bdev Controller", 00:10:17.365 "max_namespaces": 32, 00:10:17.365 "min_cntlid": 1, 00:10:17.365 "max_cntlid": 65519, 00:10:17.365 "namespaces": [ 00:10:17.365 { 00:10:17.365 "nsid": 1, 00:10:17.365 "bdev_name": "Malloc1", 00:10:17.365 "name": "Malloc1", 00:10:17.365 "nguid": "B097F370A36947969180860786327B30", 00:10:17.365 "uuid": "b097f370-a369-4796-9180-860786327b30" 00:10:17.365 }, 00:10:17.365 { 00:10:17.365 "nsid": 2, 00:10:17.365 "bdev_name": "Malloc3", 00:10:17.365 "name": "Malloc3", 00:10:17.365 "nguid": "CD87A798FB9645FB8797680657FDA17B", 00:10:17.365 "uuid": "cd87a798-fb96-45fb-8797-680657fda17b" 00:10:17.365 } 00:10:17.365 ] 00:10:17.365 }, 00:10:17.365 { 00:10:17.365 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:17.365 "subtype": "NVMe", 00:10:17.365 "listen_addresses": [ 00:10:17.365 { 00:10:17.365 "trtype": "VFIOUSER", 00:10:17.365 "adrfam": "IPv4", 00:10:17.365 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:17.365 "trsvcid": "0" 00:10:17.365 } 00:10:17.365 ], 00:10:17.365 "allow_any_host": true, 00:10:17.365 "hosts": [], 00:10:17.365 "serial_number": "SPDK2", 00:10:17.365 "model_number": "SPDK bdev Controller", 00:10:17.365 "max_namespaces": 32, 00:10:17.365 "min_cntlid": 1, 00:10:17.365 "max_cntlid": 65519, 00:10:17.365 "namespaces": [ 00:10:17.365 { 00:10:17.365 "nsid": 1, 00:10:17.365 "bdev_name": "Malloc2", 00:10:17.365 "name": "Malloc2", 00:10:17.365 "nguid": "F9F8F21740CF467795CC0FB6095333FB", 00:10:17.365 "uuid": "f9f8f217-40cf-4677-95cc-0fb6095333fb" 00:10:17.366 } 00:10:17.366 ] 00:10:17.366 } 00:10:17.366 ] 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2232404 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:17.366 00:14:08 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:17.366 00:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:17.366 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.366 [2024-07-16 00:14:08.904433] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:17.623 Malloc4 00:10:17.623 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:17.880 [2024-07-16 00:14:09.276253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:17.880 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:17.880 Asynchronous Event Request test 00:10:17.880 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.880 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.880 Registering asynchronous event callbacks... 00:10:17.880 Starting namespace attribute notice tests for all controllers... 00:10:17.880 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:17.880 aer_cb - Changed Namespace 00:10:17.880 Cleaning up... 
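The asynchronous event exercised here is provoked entirely over RPC: hot-attaching a namespace to the live subsystem is what makes the target raise the namespace-attribute notice the aer tool is blocked on. Reduced to its essentials, the trigger is the two calls captured above (the RPC shell variable is added for brevity):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # create a fresh 64 MiB, 512-byte-block malloc bdev while aer is attached
  $RPC bdev_malloc_create 64 512 --name Malloc4
  # adding it to cnode2 as NSID 2 fires the Namespace Attribute Changed notice
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2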
00:10:18.138 [ 00:10:18.138 { 00:10:18.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:18.138 "subtype": "Discovery", 00:10:18.138 "listen_addresses": [], 00:10:18.138 "allow_any_host": true, 00:10:18.138 "hosts": [] 00:10:18.138 }, 00:10:18.138 { 00:10:18.138 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:18.138 "subtype": "NVMe", 00:10:18.138 "listen_addresses": [ 00:10:18.138 { 00:10:18.138 "trtype": "VFIOUSER", 00:10:18.138 "adrfam": "IPv4", 00:10:18.138 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:18.138 "trsvcid": "0" 00:10:18.138 } 00:10:18.138 ], 00:10:18.138 "allow_any_host": true, 00:10:18.138 "hosts": [], 00:10:18.138 "serial_number": "SPDK1", 00:10:18.138 "model_number": "SPDK bdev Controller", 00:10:18.138 "max_namespaces": 32, 00:10:18.138 "min_cntlid": 1, 00:10:18.138 "max_cntlid": 65519, 00:10:18.138 "namespaces": [ 00:10:18.138 { 00:10:18.138 "nsid": 1, 00:10:18.138 "bdev_name": "Malloc1", 00:10:18.138 "name": "Malloc1", 00:10:18.138 "nguid": "B097F370A36947969180860786327B30", 00:10:18.138 "uuid": "b097f370-a369-4796-9180-860786327b30" 00:10:18.138 }, 00:10:18.138 { 00:10:18.138 "nsid": 2, 00:10:18.138 "bdev_name": "Malloc3", 00:10:18.138 "name": "Malloc3", 00:10:18.138 "nguid": "CD87A798FB9645FB8797680657FDA17B", 00:10:18.138 "uuid": "cd87a798-fb96-45fb-8797-680657fda17b" 00:10:18.138 } 00:10:18.138 ] 00:10:18.138 }, 00:10:18.138 { 00:10:18.138 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:18.138 "subtype": "NVMe", 00:10:18.138 "listen_addresses": [ 00:10:18.138 { 00:10:18.138 "trtype": "VFIOUSER", 00:10:18.138 "adrfam": "IPv4", 00:10:18.138 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:18.138 "trsvcid": "0" 00:10:18.138 } 00:10:18.138 ], 00:10:18.138 "allow_any_host": true, 00:10:18.138 "hosts": [], 00:10:18.138 "serial_number": "SPDK2", 00:10:18.138 "model_number": "SPDK bdev Controller", 00:10:18.138 "max_namespaces": 32, 00:10:18.138 "min_cntlid": 1, 00:10:18.138 "max_cntlid": 65519, 00:10:18.138 "namespaces": [ 00:10:18.138 { 00:10:18.138 "nsid": 1, 00:10:18.138 "bdev_name": "Malloc2", 00:10:18.138 "name": "Malloc2", 00:10:18.138 "nguid": "F9F8F21740CF467795CC0FB6095333FB", 00:10:18.138 "uuid": "f9f8f217-40cf-4677-95cc-0fb6095333fb" 00:10:18.138 }, 00:10:18.138 { 00:10:18.138 "nsid": 2, 00:10:18.138 "bdev_name": "Malloc4", 00:10:18.138 "name": "Malloc4", 00:10:18.138 "nguid": "DA97F958655F4A7AA7C6C74EEF63C808", 00:10:18.138 "uuid": "da97f958-655f-4a7a-a7c6-c74eef63c808" 00:10:18.138 } 00:10:18.138 ] 00:10:18.138 } 00:10:18.138 ] 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2232404 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2226804 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2226804 ']' 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2226804 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2226804 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2226804' 00:10:18.138 killing process with pid 2226804 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2226804 00:10:18.138 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2226804 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2232554 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2232554' 00:10:18.396 Process pid: 2232554 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2232554 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2232554 ']' 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.396 00:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:18.654 [2024-07-16 00:14:10.032236] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:18.654 [2024-07-16 00:14:10.033456] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:10:18.654 [2024-07-16 00:14:10.033534] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.654 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.654 [2024-07-16 00:14:10.098515] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.654 [2024-07-16 00:14:10.214842] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.654 [2024-07-16 00:14:10.214912] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:18.654 [2024-07-16 00:14:10.214938] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.654 [2024-07-16 00:14:10.214951] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.654 [2024-07-16 00:14:10.214962] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.654 [2024-07-16 00:14:10.215046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.654 [2024-07-16 00:14:10.215120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.654 [2024-07-16 00:14:10.215201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.654 [2024-07-16 00:14:10.215203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.912 [2024-07-16 00:14:10.321627] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:18.912 [2024-07-16 00:14:10.321847] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:18.912 [2024-07-16 00:14:10.322191] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:18.912 [2024-07-16 00:14:10.322805] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:18.912 [2024-07-16 00:14:10.323067] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:19.478 00:14:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.478 00:14:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:19.478 00:14:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:20.411 00:14:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:20.668 00:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:20.668 00:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:20.668 00:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:20.668 00:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:20.668 00:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:20.926 Malloc1 00:10:20.926 00:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:21.185 00:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:21.442 00:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:21.700 00:14:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:10:21.700 00:14:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:21.700 00:14:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:21.958 Malloc2 00:10:21.958 00:14:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:22.215 00:14:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:22.473 00:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2232554 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2232554 ']' 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2232554 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2232554 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2232554' 00:10:22.732 killing process with pid 2232554 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2232554 00:10:22.732 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2232554 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:23.296 00:10:23.296 real 0m53.251s 00:10:23.296 user 3m29.842s 00:10:23.296 sys 0m4.665s 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:23.296 ************************************ 00:10:23.296 END TEST nvmf_vfio_user 00:10:23.296 ************************************ 00:10:23.296 00:14:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:23.296 00:14:14 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:23.296 00:14:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.296 00:14:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.296 00:14:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.296 ************************************ 00:10:23.296 START 
TEST nvmf_vfio_user_nvme_compliance 00:10:23.296 ************************************ 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:23.296 * Looking for test storage... 00:10:23.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.296 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2233160 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2233160' 00:10:23.297 Process pid: 2233160 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2233160 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2233160 ']' 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.297 00:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:23.297 [2024-07-16 00:14:14.779309] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:10:23.297 [2024-07-16 00:14:14.779414] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.297 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.297 [2024-07-16 00:14:14.837312] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.555 [2024-07-16 00:14:14.944351] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.555 [2024-07-16 00:14:14.944405] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.555 [2024-07-16 00:14:14.944434] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.555 [2024-07-16 00:14:14.944446] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.556 [2024-07-16 00:14:14.944455] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:23.556 [2024-07-16 00:14:14.944604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.556 [2024-07-16 00:14:14.944668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.556 [2024-07-16 00:14:14.944671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.556 00:14:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.556 00:14:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:23.556 00:14:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.531 malloc0 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.531 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.812 00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.812 
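For reference, the provisioning that the xtrace above performs reads as one sequence: start nvmf_tgt, wait for its RPC socket, then create the VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener rooted at /var/run/vfio-user. A minimal standalone sketch of that sequence, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock and that a plain kill stands in for the killprocess helper:

# Sketch only; run from an SPDK checkout with a built nvmf_tgt.
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT   # kill -9 stands in for killprocess
# Poll the default RPC socket until the target answers.
until scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do sleep 0.2; done
mkdir -p /var/run/vfio-user
scripts/rpc.py nvmf_create_transport -t VFIOUSER
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0   # 64 MiB total, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The compliance binary run next is pointed at exactly this endpoint through the trid string 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'.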
00:14:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:24.812 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.812 00:10:24.812 00:10:24.812 CUnit - A unit testing framework for C - Version 2.1-3 00:10:24.812 http://cunit.sourceforge.net/ 00:10:24.812 00:10:24.812 00:10:24.812 Suite: nvme_compliance 00:10:24.812 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-16 00:14:16.296423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.812 [2024-07-16 00:14:16.297931] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:24.812 [2024-07-16 00:14:16.297956] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:24.812 [2024-07-16 00:14:16.297970] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:24.812 [2024-07-16 00:14:16.299444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.812 passed 00:10:24.812 Test: admin_identify_ctrlr_verify_fused ...[2024-07-16 00:14:16.387060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.812 [2024-07-16 00:14:16.390081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.812 passed 00:10:25.071 Test: admin_identify_ns ...[2024-07-16 00:14:16.474360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.071 [2024-07-16 00:14:16.537907] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:25.071 [2024-07-16 00:14:16.545909] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:25.071 [2024-07-16 00:14:16.567010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.071 passed 00:10:25.071 Test: admin_get_features_mandatory_features ...[2024-07-16 00:14:16.647698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.071 [2024-07-16 00:14:16.652730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.330 passed 00:10:25.330 Test: admin_get_features_optional_features ...[2024-07-16 00:14:16.738367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.330 [2024-07-16 00:14:16.741390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.330 passed 00:10:25.330 Test: admin_set_features_number_of_queues ...[2024-07-16 00:14:16.822433] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.330 [2024-07-16 00:14:16.932004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.589 passed 00:10:25.589 Test: admin_get_log_page_mandatory_logs ...[2024-07-16 00:14:17.015947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.589 [2024-07-16 00:14:17.018976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.589 passed 00:10:25.589 Test: admin_get_log_page_with_lpo ...[2024-07-16 00:14:17.100099] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.589 [2024-07-16 00:14:17.168909] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:25.589 [2024-07-16 00:14:17.181992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.846 passed 00:10:25.846 Test: fabric_property_get ...[2024-07-16 00:14:17.265460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.846 [2024-07-16 00:14:17.266721] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:25.846 [2024-07-16 00:14:17.268461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.846 passed 00:10:25.846 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-16 00:14:17.354029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.846 [2024-07-16 00:14:17.355346] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:25.846 [2024-07-16 00:14:17.357049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.846 passed 00:10:25.846 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-16 00:14:17.439388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.104 [2024-07-16 00:14:17.526892] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:26.104 [2024-07-16 00:14:17.542904] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:26.104 [2024-07-16 00:14:17.547974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.104 passed 00:10:26.104 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-16 00:14:17.630514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.104 [2024-07-16 00:14:17.631795] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:26.104 [2024-07-16 00:14:17.633536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.104 passed 00:10:26.104 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-16 00:14:17.716389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.363 [2024-07-16 00:14:17.791885] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:26.363 [2024-07-16 00:14:17.815885] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:26.363 [2024-07-16 00:14:17.820991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.363 passed 00:10:26.363 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-16 00:14:17.904555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.363 [2024-07-16 00:14:17.905846] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:26.363 [2024-07-16 00:14:17.905903] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:26.363 [2024-07-16 00:14:17.907577] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.363 passed 00:10:26.622 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-16 00:14:17.990649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.622 [2024-07-16 00:14:18.081890] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:26.622 [2024-07-16 00:14:18.089906] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:26.622 [2024-07-16 00:14:18.097890] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:26.622 [2024-07-16 00:14:18.105890] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:26.622 [2024-07-16 00:14:18.135000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.622 passed 00:10:26.622 Test: admin_create_io_sq_verify_pc ...[2024-07-16 00:14:18.219657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.622 [2024-07-16 00:14:18.233916] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:26.880 [2024-07-16 00:14:18.251058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.880 passed 00:10:26.880 Test: admin_create_io_qp_max_qps ...[2024-07-16 00:14:18.333604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:28.254 [2024-07-16 00:14:19.433909] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:28.254 [2024-07-16 00:14:19.821796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:28.254 passed 00:10:28.513 Test: admin_create_io_sq_shared_cq ...[2024-07-16 00:14:19.903074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:28.513 [2024-07-16 00:14:20.034886] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:28.513 [2024-07-16 00:14:20.075030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:28.513 passed 00:10:28.513 00:10:28.513 Run Summary: Type Total Ran Passed Failed Inactive 00:10:28.513 suites 1 1 n/a 0 0 00:10:28.513 tests 18 18 18 0 0 00:10:28.513 asserts 360 360 360 0 n/a 00:10:28.513 00:10:28.513 Elapsed time = 1.564 seconds 00:10:28.513 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2233160 00:10:28.513 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2233160 ']' 00:10:28.513 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2233160 00:10:28.771 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:28.771 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.771 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2233160 00:10:28.771 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:28.771 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:28.771 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2233160' 00:10:28.771 killing process with pid 2233160 00:10:28.771 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2233160 00:10:28.771 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2233160 00:10:29.031 00:14:20 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:29.031 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:29.031 00:10:29.031 real 0m5.765s 00:10:29.031 user 0m16.155s 00:10:29.031 sys 0m0.527s 00:10:29.031 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.031 00:14:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:29.031 ************************************ 00:10:29.031 END TEST nvmf_vfio_user_nvme_compliance 00:10:29.031 ************************************ 00:10:29.031 00:14:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:29.031 00:14:20 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:29.031 00:14:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:29.031 00:14:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.031 00:14:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:29.031 ************************************ 00:10:29.031 START TEST nvmf_vfio_user_fuzz 00:10:29.031 ************************************ 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:29.032 * Looking for test storage... 00:10:29.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.032 00:14:20 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2233888 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2233888' 00:10:29.032 Process pid: 2233888 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2233888 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2233888 ']' 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
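As in the compliance run, the script now blocks on waitforlisten with rpc_addr=/var/tmp/spdk.sock and max_retries=100, both visible in the trace. The real helper lives in test/common/autotest_common.sh; a rough, hypothetical stand-in for its contract (the polling interval and the specific RPC used here are assumptions, not SPDK's exact code):

# Hypothetical sketch of waitforlisten: return 0 once the pid answers RPCs
# on the socket, non-zero if it exits early or retries run out.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do              # mirrors max_retries=100 in the log
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}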
00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.032 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:29.292 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.292 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:29.292 00:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:30.673 malloc0 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:30.673 00:14:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:02.769 Fuzzing completed. 
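The fuzz pass itself is a single nvme_fuzz invocation against the vfio-user endpoint provisioned above. Note the disjoint core masks: the target was launched with -m 0x1 (core 0) while the fuzzer gets -m 0x2 (core 1), so the two processes never share a reactor core. Restating the traced command, with flag readings inferred from this log rather than from fuzzer documentation:

# Same command as traced above; the comments are contextual inferences.
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a
# -m 0x2     core mask (core 1; the target holds core 0 via -m 0x1)
# -t 30      run time in seconds, consistent with launch at ~00:14:21 and
#            "Fuzzing completed" at 00:14:52 (real 0m32.334s in the timing below)
# -S 123456  seed input; the per-queue random_seed values are echoed in the summary
# -F '...'   transport ID of the subsystem created above
# -N -a      reproduced verbatim; their semantics are not stated in this log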
Shutting down the fuzz application 00:11:02.769 00:11:02.769 Dumping successful admin opcodes: 00:11:02.769 8, 9, 10, 24, 00:11:02.769 Dumping successful io opcodes: 00:11:02.769 0, 00:11:02.769 NS: 0x200003a1ef00 I/O qp, Total commands completed: 616008, total successful commands: 2378, random_seed: 913187904 00:11:02.769 NS: 0x200003a1ef00 admin qp, Total commands completed: 88320, total successful commands: 708, random_seed: 2824467840 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2233888 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2233888 ']' 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2233888 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2233888 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2233888' 00:11:02.769 killing process with pid 2233888 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2233888 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2233888 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:02.769 00:11:02.769 real 0m32.334s 00:11:02.769 user 0m34.460s 00:11:02.769 sys 0m25.928s 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.769 00:14:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:02.769 ************************************ 00:11:02.769 END TEST nvmf_vfio_user_fuzz 00:11:02.769 ************************************ 00:11:02.769 00:14:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:02.769 00:14:52 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:02.769 00:14:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:02.769 00:14:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.769 00:14:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.769 ************************************ 00:11:02.769 
START TEST nvmf_host_management 00:11:02.769 ************************************ 00:11:02.769 00:14:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:02.769 * Looking for test storage... 00:11:02.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.770 00:14:52 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.770 00:14:52 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.770 00:14:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.340 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:03.341 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:03.341 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:03.341 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:03.341 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.341 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.600 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.600 00:14:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:11:03.600 00:11:03.600 --- 10.0.0.2 ping statistics --- 00:11:03.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.600 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:11:03.600 00:11:03.600 --- 10.0.0.1 ping statistics --- 00:11:03.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.600 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2239327 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2239327 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2239327 ']' 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:03.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.600 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.600 [2024-07-16 00:14:55.140529] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:11:03.601 [2024-07-16 00:14:55.140604] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.601 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.601 [2024-07-16 00:14:55.204130] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.860 [2024-07-16 00:14:55.312596] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.860 [2024-07-16 00:14:55.312652] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.860 [2024-07-16 00:14:55.312680] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.860 [2024-07-16 00:14:55.312691] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.860 [2024-07-16 00:14:55.312700] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.860 [2024-07-16 00:14:55.312792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.860 [2024-07-16 00:14:55.312897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.860 [2024-07-16 00:14:55.312923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:03.860 [2024-07-16 00:14:55.312925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.860 [2024-07-16 00:14:55.457559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.860 00:14:55 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:03.860 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.120 Malloc0 00:11:04.120 [2024-07-16 00:14:55.518822] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2239386 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2239386 /var/tmp/bdevperf.sock 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2239386 ']' 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:04.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
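The harness's waitforlisten above blocks until the freshly launched bdevperf answers RPCs on /var/tmp/bdevperf.sock. A minimal stand-in for that helper, assuming an SPDK checkout (the rpc.py path, the 0.5 s poll interval, and the function name are our choices for the sketch; the 100-try budget mirrors max_retries=100 in the trace):

    # Poll an SPDK app's RPC socket until it responds; spdk_get_version
    # is a cheap RPC that succeeds as soon as the app is listening.
    wait_for_rpc_sock() {
        local sock=$1 retries=${2:-100}
        while ((retries-- > 0)); do
            if scripts/rpc.py -s "$sock" spdk_get_version &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }

    wait_for_rpc_sock /var/tmp/bdevperf.sock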
00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:04.120 { 00:11:04.120 "params": { 00:11:04.120 "name": "Nvme$subsystem", 00:11:04.120 "trtype": "$TEST_TRANSPORT", 00:11:04.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.120 "adrfam": "ipv4", 00:11:04.120 "trsvcid": "$NVMF_PORT", 00:11:04.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.120 "hdgst": ${hdgst:-false}, 00:11:04.120 "ddgst": ${ddgst:-false} 00:11:04.120 }, 00:11:04.120 "method": "bdev_nvme_attach_controller" 00:11:04.120 } 00:11:04.120 EOF 00:11:04.120 )") 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:04.120 00:14:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:04.120 "params": { 00:11:04.120 "name": "Nvme0", 00:11:04.120 "trtype": "tcp", 00:11:04.120 "traddr": "10.0.0.2", 00:11:04.120 "adrfam": "ipv4", 00:11:04.120 "trsvcid": "4420", 00:11:04.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:04.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:04.120 "hdgst": false, 00:11:04.120 "ddgst": false 00:11:04.120 }, 00:11:04.120 "method": "bdev_nvme_attach_controller" 00:11:04.120 }' 00:11:04.120 [2024-07-16 00:14:55.595524] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:11:04.120 [2024-07-16 00:14:55.595601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239386 ] 00:11:04.120 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.120 [2024-07-16 00:14:55.655356] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.378 [2024-07-16 00:14:55.765204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.638 Running I/O for 10 seconds... 
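The gen_nvmf_target_json expansion traced above amounts to substituting this run's connection parameters into a per-subsystem template and normalizing it with jq, then handing the result to bdevperf as --json /dev/fd/63. A self-contained sketch of that expansion (values are the ones this run used; gen_config is our name, and it swaps the harness's heredoc for jq -n so the snippet runs as written):

    TEST_TRANSPORT=tcp
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_PORT=4420

    # Emit the attach-controller config for one subsystem, matching the
    # JSON printed by nvmf/common.sh@558 above.
    gen_config() {
        local s=${1:-0}
        jq -n --arg name "Nvme$s" --arg t "$TEST_TRANSPORT" \
              --arg ip "$NVMF_FIRST_TARGET_IP" --arg port "$NVMF_PORT" \
              --arg subnqn "nqn.2016-06.io.spdk:cnode$s" \
              --arg hostnqn "nqn.2016-06.io.spdk:host$s" \
              '{params: {name: $name, trtype: $t, traddr: $ip,
                         adrfam: "ipv4", trsvcid: $port,
                         subnqn: $subnqn, hostnqn: $hostnqn,
                         hdgst: false, ddgst: false},
                method: "bdev_nvme_attach_controller"}'
    }

    gen_config 0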
00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=64 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 64 -ge 100 ']' 00:11:04.638 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=450 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 450 -ge 100 ']' 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:04.899 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:04.900 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.900 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:04.900 [2024-07-16 00:14:56.466020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2283690 is same with the state(5) to be set
00:11:04.900 [... the identical tqpair=0x2283690 record repeated roughly 60 more times, timestamps 00:14:56.466120 through 00:14:56.466864; duplicates collapsed ...]
00:11:04.900 [2024-07-16 00:14:56.467031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:04.900 [2024-07-16 00:14:56.467072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:04.901 [... 62 further READ/ABORTED pairs collapsed: cid 1 through 62, lba 57472 through 65280 in steps of 128, every in-flight command aborted by SQ deletion ...]
00:11:04.902 [2024-07-16 00:14:56.469086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:04.902 [2024-07-16 00:14:56.469101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:04.902 [2024-07-16 00:14:56.469116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfcca0 is same with the state(5) to be set
00:11:04.902 [2024-07-16 00:14:56.469204] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcfcca0 was disconnected and freed. reset controller.
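The wall of ABORTED - SQ DELETION completions collapsed above is the expected fallout of nvmf_subsystem_remove_host: the target tears down the host's I/O qpair, and every command still in flight completes as aborted. The counts line up with the bdevperf invocation; a quick arithmetic check (the 512-byte block size is inferred from len:128 for 64 KiB IOs, not stated in the log):

    echo $((65536 / 512))       # 128 -> blocks per IO, matching len:128 and the lba stride
    echo $((57344 + 63 * 128))  # 65408 -> lba of the 64th aborted READ (cid:63), matching -q 64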
00:11:04.902 [2024-07-16 00:14:56.469270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:11:04.902 [2024-07-16 00:14:56.469292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:04.902 [... three further ASYNC EVENT REQUEST/ABORTED pairs (cid 1-3, timestamps 00:14:56.469309 through 00:14:56.469380) collapsed ...]
00:11:04.902 [2024-07-16 00:14:56.469394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb980 is same with the state(5) to be set
00:11:04.902 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:04.902 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:04.902 [2024-07-16 00:14:56.470546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:04.902 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:04.902 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:04.902 task offset: 57344 on job bdev=Nvme0n1 fails
00:11:04.902
00:11:04.902                                                    Latency(us)
00:11:04.902 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:11:04.902 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:04.902 Job: Nvme0n1 ended in about 0.39 seconds with error
00:11:04.902          Verification LBA range: start 0x0 length 0x400
00:11:04.902          Nvme0n1             :       0.39    1134.59      70.91    162.08     0.00   47992.65   11262.48   41360.50
00:11:04.902 ===================================================================================================================
00:11:04.902 Total                       :               1134.59      70.91    162.08     0.00   47992.65   11262.48   41360.50
00:11:04.902 [2024-07-16 00:14:56.472643] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:04.902 [2024-07-16 00:14:56.472685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8eb980 (9): Bad file descriptor
00:11:04.902 00:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:04.902 00:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:11:04.902 [2024-07-16 00:14:56.485276] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
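A consistency check on the failed-run table above: at an IO size of 65536 bytes there are exactly 16 IOs per MiB, so the MiB/s column should be IOPS divided by 16 (bc handles the decimal division):

    # MiB/s = IOPS * 65536 / 2^20 = IOPS / 16 for 64 KiB IOs
    echo "scale=2; 1134.59 / 16" | bc   # 70.91 -> the failed run above
    echo "scale=2; 1340.84 / 16" | bc   # 83.80 -> the successful rerun later in the log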
00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2239386 00:11:05.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2239386) - No such process 00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:05.878 00:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:05.878 { 00:11:05.878 "params": { 00:11:05.878 "name": "Nvme$subsystem", 00:11:05.879 "trtype": "$TEST_TRANSPORT", 00:11:05.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:05.879 "adrfam": "ipv4", 00:11:05.879 "trsvcid": "$NVMF_PORT", 00:11:05.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:05.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:05.879 "hdgst": ${hdgst:-false}, 00:11:05.879 "ddgst": ${ddgst:-false} 00:11:05.879 }, 00:11:05.879 "method": "bdev_nvme_attach_controller" 00:11:05.879 } 00:11:05.879 EOF 00:11:05.879 )") 00:11:05.879 00:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:05.879 00:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:05.879 00:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:05.879 00:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:05.879 "params": { 00:11:05.879 "name": "Nvme0", 00:11:05.879 "trtype": "tcp", 00:11:05.879 "traddr": "10.0.0.2", 00:11:05.879 "adrfam": "ipv4", 00:11:05.879 "trsvcid": "4420", 00:11:05.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:05.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:05.879 "hdgst": false, 00:11:05.879 "ddgst": false 00:11:05.879 }, 00:11:05.879 "method": "bdev_nvme_attach_controller" 00:11:05.879 }' 00:11:06.139 [2024-07-16 00:14:57.525750] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:11:06.139 [2024-07-16 00:14:57.525825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239654 ] 00:11:06.139 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.139 [2024-07-16 00:14:57.587008] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.139 [2024-07-16 00:14:57.699017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.398 Running I/O for 1 seconds... 
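Both bdevperf launches above pass --json /dev/fd/NN (63, then 62): that is bash process substitution, so the generated config reaches bdevperf through an anonymous pipe rather than a temp file. The pattern in isolation, reusing the gen_config sketch from earlier (flags are the ones on this run's command line; the binary path is relative to the spdk checkout):

    # Hand the generated JSON to bdevperf via process substitution;
    # bash picks the /dev/fd/NN path automatically.
    ./build/examples/bdevperf --json <(gen_config 0) -q 64 -o 65536 -w verify -t 1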
00:11:07.775
00:11:07.775                                                    Latency(us)
00:11:07.775 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:11:07.775 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:07.775          Verification LBA range: start 0x0 length 0x400
00:11:07.775          Nvme0n1             :       1.00    1340.84      83.80      0.00     0.00   47025.31   10825.58   41554.68
00:11:07.775 ===================================================================================================================
00:11:07.775 Total                       :               1340.84      83.80      0.00     0.00   47025.31   10825.58   41554.68
00:11:07.775 00:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2239327 ']' 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2239327 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2239327 ']' 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2239327 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2239327 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2239327'
killing process with pid 2239327
00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2239327 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2239327 00:11:08.034 [2024-07-16 00:14:59.600331]
app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:08.034 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.034 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.034 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.034 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.034 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.034 00:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.034 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.034 00:14:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.592 00:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.592 00:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:10.592 00:11:10.592 real 0m8.797s 00:11:10.592 user 0m20.324s 00:11:10.592 sys 0m2.544s 00:11:10.592 00:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.592 00:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.592 ************************************ 00:11:10.592 END TEST nvmf_host_management 00:11:10.592 ************************************ 00:11:10.592 00:15:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:10.592 00:15:01 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:10.592 00:15:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.592 00:15:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.592 00:15:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.592 ************************************ 00:11:10.592 START TEST nvmf_lvol 00:11:10.592 ************************************ 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:10.592 * Looking for test storage... 
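The nvmf_lvol run starting here derives its host identity from nvme gen-hostnqn, as the next records show. One way to reproduce that pairing of NQN and host ID outside the harness (nvme-cli required; splitting on the last colon matches the values in this log, though the harness may derive NVME_HOSTID differently):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing <uuid>
    echo "$NVME_HOSTNQN" "$NVME_HOSTID"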
00:11:10.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.592 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.593 00:15:01 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.593 00:15:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:12.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:12.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:12.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.497 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:12.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.498 
00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:11:12.498 00:11:12.498 --- 10.0.0.2 ping statistics --- 00:11:12.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.498 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:11:12.498 00:11:12.498 --- 10.0.0.1 ping statistics --- 00:11:12.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.498 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2241856 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2241856 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2241856 ']' 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.498 00:15:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:12.498 [2024-07-16 00:15:03.908245] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:11:12.498 [2024-07-16 00:15:03.908333] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.498 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.498 [2024-07-16 00:15:03.973759] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:12.498 [2024-07-16 00:15:04.079976] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.498 [2024-07-16 00:15:04.080032] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
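For reference, the nvmf_tcp_init plumbing traced above condenses to the sketch below. Interface names, addresses, and flags are copied from this trace; nvmf_tgt stands for the full build/bin path, and the per-command xtrace noise is omitted, so treat this as a summary of what the harness ran rather than an exact replay.

  # Move one ice port into a namespace so target and initiator share one host:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target application then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7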
00:11:12.498 [2024-07-16 00:15:04.080061] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.498 [2024-07-16 00:15:04.080073] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.498 [2024-07-16 00:15:04.080082] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.498 [2024-07-16 00:15:04.080136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.498 [2024-07-16 00:15:04.080916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.498 [2024-07-16 00:15:04.081049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.756 00:15:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.756 00:15:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:12.756 00:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.756 00:15:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.756 00:15:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:12.756 00:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.756 00:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:13.014 [2024-07-16 00:15:04.497545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.014 00:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.272 00:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:13.272 00:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.530 00:15:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:13.530 00:15:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:14.100 00:15:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:14.359 00:15:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b8feda5f-4a52-48eb-a7ce-cf3bbf2199be 00:11:14.359 00:15:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b8feda5f-4a52-48eb-a7ce-cf3bbf2199be lvol 20 00:11:14.618 00:15:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=872a2dd6-97bf-4ec9-9152-9ccd8efb88bb 00:11:14.618 00:15:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:14.876 00:15:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 872a2dd6-97bf-4ec9-9152-9ccd8efb88bb 00:11:15.134 00:15:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
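The target stack assembled by the rpc.py calls just traced reduces to the following sketch (full script paths shortened to rpc.py; UUIDs in the comments are the ones this run produced). The lvstore and lvol IDs are captured into variables the same way nvmf_lvol.sh does it, and the listener notice that the final call emits follows below.

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                     # Malloc0: 64 MiB, 512 B blocks
  rpc.py bdev_malloc_create 64 512                     # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)     # b8feda5f-... in this run
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB lvol, 872a2dd6-...
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420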
00:11:15.392 [2024-07-16 00:15:06.812846] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.392 00:15:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:15.650 00:15:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2242296 00:11:15.650 00:15:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:15.650 00:15:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:15.650 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.587 00:15:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 872a2dd6-97bf-4ec9-9152-9ccd8efb88bb MY_SNAPSHOT 00:11:16.845 00:15:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=249a8070-c4a0-4392-8922-07d581f9c40e 00:11:16.845 00:15:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 872a2dd6-97bf-4ec9-9152-9ccd8efb88bb 30 00:11:17.103 00:15:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 249a8070-c4a0-4392-8922-07d581f9c40e MY_CLONE 00:11:17.361 00:15:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c379cd8c-aa10-43fe-94b5-389bec7998cd 00:11:17.361 00:15:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c379cd8c-aa10-43fe-94b5-389bec7998cd 00:11:17.927 00:15:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2242296 00:11:26.069 Initializing NVMe Controllers 00:11:26.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:26.069 Controller IO queue size 128, less than required. 00:11:26.069 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:26.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:26.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:26.069 Initialization complete. Launching workers. 
00:11:26.069 ======================================================== 00:11:26.069 Latency(us) 00:11:26.069 Device Information : IOPS MiB/s Average min max 00:11:26.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10579.10 41.32 12100.19 1884.31 69048.93 00:11:26.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10476.50 40.92 12222.45 2407.37 79715.66 00:11:26.069 ======================================================== 00:11:26.069 Total : 21055.60 82.25 12161.03 1884.31 79715.66 00:11:26.069 00:11:26.069 00:15:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:26.339 00:15:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 872a2dd6-97bf-4ec9-9152-9ccd8efb88bb 00:11:26.597 00:15:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b8feda5f-4a52-48eb-a7ce-cf3bbf2199be 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.856 rmmod nvme_tcp 00:11:26.856 rmmod nvme_fabrics 00:11:26.856 rmmod nvme_keyring 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2241856 ']' 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2241856 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2241856 ']' 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2241856 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2241856 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2241856' 00:11:26.856 killing process with pid 2241856 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2241856 00:11:26.856 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2241856 00:11:27.115 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.115 
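While spdk_nvme_perf held the queues busy above, the test walked the lvol lifecycle whose effects the latency table reflects. Taken in isolation, those rpc.py calls (path shortened, UUIDs from this run) are:

  snap=$(rpc.py bdev_lvol_snapshot 872a2dd6-97bf-4ec9-9152-9ccd8efb88bb MY_SNAPSHOT)
  rpc.py bdev_lvol_resize 872a2dd6-97bf-4ec9-9152-9ccd8efb88bb 30   # grow 20 -> 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)                  # c379cd8c-... here
  rpc.py bdev_lvol_inflate "$clone"   # copy shared clusters so the clone stands alone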
00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.115 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.115 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.115 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.115 00:15:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.115 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.115 00:15:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.653 00:11:29.653 real 0m19.040s 00:11:29.653 user 1m5.022s 00:11:29.653 sys 0m5.708s 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:29.653 ************************************ 00:11:29.653 END TEST nvmf_lvol 00:11:29.653 ************************************ 00:11:29.653 00:15:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.653 00:15:20 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:29.653 00:15:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.653 00:15:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.653 00:15:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.653 ************************************ 00:11:29.653 START TEST nvmf_lvs_grow 00:11:29.653 ************************************ 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:29.653 * Looking for test storage... 
00:11:29.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.653 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.654 00:15:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:31.559 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:31.559 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:31.559 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:31.559 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:11:31.559 00:11:31.559 --- 10.0.0.2 ping statistics --- 00:11:31.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.559 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:31.559 00:11:31.559 --- 10.0.0.1 ping statistics --- 00:11:31.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.559 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.559 00:15:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.559 00:15:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:31.559 00:15:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.559 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.559 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.559 00:15:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2246125 00:11:31.560 00:15:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:31.560 00:15:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2246125 00:11:31.560 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2246125 ']' 00:11:31.560 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.560 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.560 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.560 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.560 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.560 [2024-07-16 00:15:23.066671] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:11:31.560 [2024-07-16 00:15:23.066759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.560 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.560 [2024-07-16 00:15:23.132215] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.818 [2024-07-16 00:15:23.249809] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.818 [2024-07-16 00:15:23.249873] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
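The lvs_grow_clean pass that the freshly started target runs next reduces to the sketch below; aio_file stands in for the workspace aio_bdev path used in the trace, the sizes and cluster counts are from this run, and the final grow actually happens later while bdevperf I/O is in flight.

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096            # 4 KiB logical blocks
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters at first
  rpc.py bdev_lvol_create -u "$lvs" lvol 150               # 150 MiB lvol
  truncate -s 400M aio_file                                # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                          # block count 51200 -> 102400
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                  # lvstore then reports 99 clusters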
00:11:31.818 [2024-07-16 00:15:23.249900] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.818 [2024-07-16 00:15:23.249913] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.818 [2024-07-16 00:15:23.249925] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.818 [2024-07-16 00:15:23.249973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.818 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.818 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:31.818 00:15:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.818 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.818 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.818 00:15:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.818 00:15:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:32.075 [2024-07-16 00:15:23.672322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.333 ************************************ 00:11:32.333 START TEST lvs_grow_clean 00:11:32.333 ************************************ 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:32.333 00:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:32.592 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:32.592 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:32.850 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:32.850 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:32.850 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:33.107 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:33.107 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:33.107 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b22aeabc-4b02-4634-8f96-2cde841a80cc lvol 150 00:11:33.365 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a 00:11:33.365 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:33.365 00:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:33.622 [2024-07-16 00:15:25.161460] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:33.622 [2024-07-16 00:15:25.161548] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:33.622 true 00:11:33.622 00:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:33.622 00:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:33.880 00:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:33.880 00:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:34.137 00:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a 00:11:34.702 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:34.702 [2024-07-16 00:15:26.245023] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.702 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2246618 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2246618 /var/tmp/bdevperf.sock 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2246618 ']' 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:34.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.960 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:35.218 [2024-07-16 00:15:26.603280] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
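On the initiator side, traced below, bdevperf (build and helper paths shortened here) attaches the exported namespace over TCP as an NVMe bdev and is then kicked off through its JSON-RPC test hook:

  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # starts the 10 s randwrite run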
00:11:35.218 [2024-07-16 00:15:26.603361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246618 ] 00:11:35.218 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.218 [2024-07-16 00:15:26.670120] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.218 [2024-07-16 00:15:26.786075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.477 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.477 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:35.477 00:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:36.043 Nvme0n1 00:11:36.043 00:15:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:36.302 [ 00:11:36.302 { 00:11:36.302 "name": "Nvme0n1", 00:11:36.302 "aliases": [ 00:11:36.302 "57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a" 00:11:36.302 ], 00:11:36.302 "product_name": "NVMe disk", 00:11:36.302 "block_size": 4096, 00:11:36.302 "num_blocks": 38912, 00:11:36.302 "uuid": "57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a", 00:11:36.302 "assigned_rate_limits": { 00:11:36.302 "rw_ios_per_sec": 0, 00:11:36.302 "rw_mbytes_per_sec": 0, 00:11:36.302 "r_mbytes_per_sec": 0, 00:11:36.302 "w_mbytes_per_sec": 0 00:11:36.302 }, 00:11:36.302 "claimed": false, 00:11:36.302 "zoned": false, 00:11:36.302 "supported_io_types": { 00:11:36.302 "read": true, 00:11:36.302 "write": true, 00:11:36.302 "unmap": true, 00:11:36.302 "flush": true, 00:11:36.302 "reset": true, 00:11:36.302 "nvme_admin": true, 00:11:36.302 "nvme_io": true, 00:11:36.302 "nvme_io_md": false, 00:11:36.302 "write_zeroes": true, 00:11:36.302 "zcopy": false, 00:11:36.302 "get_zone_info": false, 00:11:36.302 "zone_management": false, 00:11:36.302 "zone_append": false, 00:11:36.302 "compare": true, 00:11:36.302 "compare_and_write": true, 00:11:36.302 "abort": true, 00:11:36.302 "seek_hole": false, 00:11:36.302 "seek_data": false, 00:11:36.302 "copy": true, 00:11:36.302 "nvme_iov_md": false 00:11:36.302 }, 00:11:36.302 "memory_domains": [ 00:11:36.302 { 00:11:36.302 "dma_device_id": "system", 00:11:36.302 "dma_device_type": 1 00:11:36.302 } 00:11:36.302 ], 00:11:36.302 "driver_specific": { 00:11:36.302 "nvme": [ 00:11:36.302 { 00:11:36.302 "trid": { 00:11:36.302 "trtype": "TCP", 00:11:36.302 "adrfam": "IPv4", 00:11:36.302 "traddr": "10.0.0.2", 00:11:36.302 "trsvcid": "4420", 00:11:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:36.302 }, 00:11:36.302 "ctrlr_data": { 00:11:36.302 "cntlid": 1, 00:11:36.302 "vendor_id": "0x8086", 00:11:36.302 "model_number": "SPDK bdev Controller", 00:11:36.302 "serial_number": "SPDK0", 00:11:36.302 "firmware_revision": "24.09", 00:11:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:36.302 "oacs": { 00:11:36.302 "security": 0, 00:11:36.302 "format": 0, 00:11:36.302 "firmware": 0, 00:11:36.302 "ns_manage": 0 00:11:36.302 }, 00:11:36.302 "multi_ctrlr": true, 00:11:36.302 "ana_reporting": false 00:11:36.302 }, 
00:11:36.302 "vs": { 00:11:36.302 "nvme_version": "1.3" 00:11:36.302 }, 00:11:36.302 "ns_data": { 00:11:36.302 "id": 1, 00:11:36.302 "can_share": true 00:11:36.302 } 00:11:36.302 } 00:11:36.302 ], 00:11:36.302 "mp_policy": "active_passive" 00:11:36.302 } 00:11:36.302 } 00:11:36.302 ] 00:11:36.302 00:15:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2246753 00:11:36.303 00:15:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:36.303 00:15:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:36.303 Running I/O for 10 seconds... 00:11:37.236 Latency(us) 00:11:37.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.236 Nvme0n1 : 1.00 13960.00 54.53 0.00 0.00 0.00 0.00 0.00 00:11:37.236 =================================================================================================================== 00:11:37.236 Total : 13960.00 54.53 0.00 0.00 0.00 0.00 0.00 00:11:37.236 00:11:38.171 00:15:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:38.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.429 Nvme0n1 : 2.00 14212.00 55.52 0.00 0.00 0.00 0.00 0.00 00:11:38.429 =================================================================================================================== 00:11:38.429 Total : 14212.00 55.52 0.00 0.00 0.00 0.00 0.00 00:11:38.429 00:11:38.429 true 00:11:38.429 00:15:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:38.429 00:15:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:38.688 00:15:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:38.688 00:15:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:38.688 00:15:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2246753 00:11:39.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.253 Nvme0n1 : 3.00 14210.67 55.51 0.00 0.00 0.00 0.00 0.00 00:11:39.253 =================================================================================================================== 00:11:39.253 Total : 14210.67 55.51 0.00 0.00 0.00 0.00 0.00 00:11:39.253 00:11:40.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.633 Nvme0n1 : 4.00 14257.75 55.69 0.00 0.00 0.00 0.00 0.00 00:11:40.633 =================================================================================================================== 00:11:40.633 Total : 14257.75 55.69 0.00 0.00 0.00 0.00 0.00 00:11:40.633 00:11:41.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.238 Nvme0n1 : 5.00 14363.20 56.11 0.00 0.00 0.00 0.00 0.00 00:11:41.238 =================================================================================================================== 00:11:41.238 
Total : 14363.20 56.11 0.00 0.00 0.00 0.00 0.00 00:11:41.238 00:11:42.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.615 Nvme0n1 : 6.00 14444.00 56.42 0.00 0.00 0.00 0.00 0.00 00:11:42.615 =================================================================================================================== 00:11:42.615 Total : 14444.00 56.42 0.00 0.00 0.00 0.00 0.00 00:11:42.615 00:11:43.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.547 Nvme0n1 : 7.00 14455.86 56.47 0.00 0.00 0.00 0.00 0.00 00:11:43.547 =================================================================================================================== 00:11:43.547 Total : 14455.86 56.47 0.00 0.00 0.00 0.00 0.00 00:11:43.547 00:11:44.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.480 Nvme0n1 : 8.00 14520.88 56.72 0.00 0.00 0.00 0.00 0.00 00:11:44.480 =================================================================================================================== 00:11:44.480 Total : 14520.88 56.72 0.00 0.00 0.00 0.00 0.00 00:11:44.480 00:11:45.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.411 Nvme0n1 : 9.00 14564.44 56.89 0.00 0.00 0.00 0.00 0.00 00:11:45.411 =================================================================================================================== 00:11:45.411 Total : 14564.44 56.89 0.00 0.00 0.00 0.00 0.00 00:11:45.411 00:11:46.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.342 Nvme0n1 : 10.00 14567.20 56.90 0.00 0.00 0.00 0.00 0.00 00:11:46.342 =================================================================================================================== 00:11:46.342 Total : 14567.20 56.90 0.00 0.00 0.00 0.00 0.00 00:11:46.342 00:11:46.342 00:11:46.342 Latency(us) 00:11:46.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.342 Nvme0n1 : 10.01 14567.36 56.90 0.00 0.00 8780.61 5412.79 14660.65 00:11:46.342 =================================================================================================================== 00:11:46.342 Total : 14567.36 56.90 0.00 0.00 8780.61 5412.79 14660.65 00:11:46.342 0 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2246618 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2246618 ']' 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2246618 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2246618 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2246618' 00:11:46.342 killing process with pid 2246618 00:11:46.342 00:15:37 
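The part of the run above that matters is iteration 2: while bdevperf was still writing, the test called bdev_lvol_grow_lvstore, and total_data_clusters moved from 49 to 99 (400 MiB / 4 MiB = 100 clusters, less the metadata cluster) without disturbing the I/O. The check reduces to, assuming $lvs as before:

  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( data_clusters == 99 ))   # store grown while writes are in flight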
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2246618 00:11:46.342 Received shutdown signal, test time was about 10.000000 seconds 00:11:46.342 00:11:46.342 Latency(us) 00:11:46.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.342 =================================================================================================================== 00:11:46.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:46.342 00:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2246618 00:11:46.599 00:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:46.856 00:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:47.113 00:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:47.113 00:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:47.371 00:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:47.371 00:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:47.371 00:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:47.629 [2024-07-16 00:15:39.159688] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:47.629 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:47.887 request: 00:11:47.887 { 00:11:47.887 "uuid": "b22aeabc-4b02-4634-8f96-2cde841a80cc", 00:11:47.887 "method": "bdev_lvol_get_lvstores", 00:11:47.887 "req_id": 1 00:11:47.887 } 00:11:47.887 Got JSON-RPC error response 00:11:47.887 response: 00:11:47.887 { 00:11:47.887 "code": -19, 00:11:47.887 "message": "No such device" 00:11:47.887 } 00:11:47.887 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:47.887 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.887 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.887 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.887 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:48.146 aio_bdev 00:11:48.146 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a 00:11:48.146 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a 00:11:48.146 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:48.146 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:48.146 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:48.146 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:48.146 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:48.404 00:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a -t 2000 00:11:48.969 [ 00:11:48.969 { 00:11:48.969 "name": "57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a", 00:11:48.969 "aliases": [ 00:11:48.969 "lvs/lvol" 00:11:48.969 ], 00:11:48.969 "product_name": "Logical Volume", 00:11:48.969 "block_size": 4096, 00:11:48.969 "num_blocks": 38912, 00:11:48.969 "uuid": "57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a", 00:11:48.969 "assigned_rate_limits": { 00:11:48.969 "rw_ios_per_sec": 0, 00:11:48.969 "rw_mbytes_per_sec": 0, 00:11:48.969 "r_mbytes_per_sec": 0, 00:11:48.969 "w_mbytes_per_sec": 0 00:11:48.969 }, 00:11:48.969 "claimed": false, 00:11:48.969 "zoned": false, 00:11:48.969 "supported_io_types": { 00:11:48.969 "read": true, 00:11:48.969 "write": true, 00:11:48.969 "unmap": true, 00:11:48.969 "flush": false, 00:11:48.969 "reset": true, 00:11:48.969 "nvme_admin": false, 00:11:48.969 "nvme_io": false, 00:11:48.969 
"nvme_io_md": false, 00:11:48.969 "write_zeroes": true, 00:11:48.969 "zcopy": false, 00:11:48.969 "get_zone_info": false, 00:11:48.969 "zone_management": false, 00:11:48.969 "zone_append": false, 00:11:48.969 "compare": false, 00:11:48.969 "compare_and_write": false, 00:11:48.969 "abort": false, 00:11:48.969 "seek_hole": true, 00:11:48.969 "seek_data": true, 00:11:48.969 "copy": false, 00:11:48.969 "nvme_iov_md": false 00:11:48.969 }, 00:11:48.969 "driver_specific": { 00:11:48.969 "lvol": { 00:11:48.969 "lvol_store_uuid": "b22aeabc-4b02-4634-8f96-2cde841a80cc", 00:11:48.969 "base_bdev": "aio_bdev", 00:11:48.969 "thin_provision": false, 00:11:48.969 "num_allocated_clusters": 38, 00:11:48.969 "snapshot": false, 00:11:48.969 "clone": false, 00:11:48.969 "esnap_clone": false 00:11:48.969 } 00:11:48.969 } 00:11:48.969 } 00:11:48.969 ] 00:11:48.969 00:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:48.969 00:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:48.969 00:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:48.969 00:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:48.969 00:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:48.969 00:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:49.227 00:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:49.227 00:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 57c6ea3c-a7c9-4a80-953b-b1cb9a5e726a 00:11:49.486 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b22aeabc-4b02-4634-8f96-2cde841a80cc 00:11:49.745 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:50.004 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:50.004 00:11:50.004 real 0m17.885s 00:11:50.004 user 0m17.286s 00:11:50.004 sys 0m1.986s 00:11:50.004 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.004 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:50.004 ************************************ 00:11:50.004 END TEST lvs_grow_clean 00:11:50.004 ************************************ 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:50.263 ************************************ 00:11:50.263 START TEST lvs_grow_dirty 00:11:50.263 ************************************ 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:50.263 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:50.520 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:50.520 00:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:50.778 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=76b47297-8123-4fcb-9078-98d8bd15a802 00:11:50.778 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:11:50.778 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:51.036 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:51.036 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:51.036 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 76b47297-8123-4fcb-9078-98d8bd15a802 lvol 150 00:11:51.293 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b 00:11:51.293 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.293 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:51.550 
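The resize NOTICE that follows is driven entirely from userspace: the backing file is extended, then the AIO bdev is told to re-read its size. The block counts in the NOTICE are just the file sizes at the 4096-byte block size (51200 x 4096 B = 200 MiB, 102400 x 4096 B = 400 MiB). A sketch, filename shortened:

  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev   # old block count 51200, new block count 102400

The lvstore is notified of the event but does not claim the new space on its own, which is why the data_clusters == 49 check still passes until bdev_lvol_grow_lvstore runs later in the run.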
[2024-07-16 00:15:42.955257] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:51.550 [2024-07-16 00:15:42.955355] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:51.550 true 00:11:51.550 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:11:51.550 00:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:51.807 00:15:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:51.807 00:15:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:52.064 00:15:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b 00:11:52.322 00:15:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:52.322 [2024-07-16 00:15:43.926198] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.580 00:15:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2248679 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2248679 /var/tmp/bdevperf.sock 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2248679 ']' 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
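Once bdevperf answers on its private socket, the test attaches the target through it as a TCP initiator and the lvol surfaces as Nvme0n1 (below). Its advertised size follows from cluster rounding: the 150 MiB request allocates 38 whole 4 MiB clusters (num_allocated_clusters 38 in the lvol dumps), and 38 x 4 MiB / 4096 B = 38912, the num_blocks in the bdev dump. A sketch, socket path as in the trace:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 | jq -r '.[0].num_blocks'   # 38912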
00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.580 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:52.838 [2024-07-16 00:15:44.228009] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:11:52.838 [2024-07-16 00:15:44.228099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248679 ] 00:11:52.838 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.838 [2024-07-16 00:15:44.289208] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.838 [2024-07-16 00:15:44.405514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.096 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.096 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:53.096 00:15:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:53.660 Nvme0n1 00:11:53.660 00:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:53.918 [ 00:11:53.918 { 00:11:53.918 "name": "Nvme0n1", 00:11:53.918 "aliases": [ 00:11:53.918 "8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b" 00:11:53.918 ], 00:11:53.918 "product_name": "NVMe disk", 00:11:53.918 "block_size": 4096, 00:11:53.918 "num_blocks": 38912, 00:11:53.918 "uuid": "8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b", 00:11:53.918 "assigned_rate_limits": { 00:11:53.918 "rw_ios_per_sec": 0, 00:11:53.918 "rw_mbytes_per_sec": 0, 00:11:53.918 "r_mbytes_per_sec": 0, 00:11:53.918 "w_mbytes_per_sec": 0 00:11:53.918 }, 00:11:53.918 "claimed": false, 00:11:53.918 "zoned": false, 00:11:53.918 "supported_io_types": { 00:11:53.918 "read": true, 00:11:53.918 "write": true, 00:11:53.918 "unmap": true, 00:11:53.918 "flush": true, 00:11:53.918 "reset": true, 00:11:53.918 "nvme_admin": true, 00:11:53.918 "nvme_io": true, 00:11:53.918 "nvme_io_md": false, 00:11:53.918 "write_zeroes": true, 00:11:53.918 "zcopy": false, 00:11:53.918 "get_zone_info": false, 00:11:53.918 "zone_management": false, 00:11:53.918 "zone_append": false, 00:11:53.918 "compare": true, 00:11:53.918 "compare_and_write": true, 00:11:53.918 "abort": true, 00:11:53.918 "seek_hole": false, 00:11:53.918 "seek_data": false, 00:11:53.918 "copy": true, 00:11:53.918 "nvme_iov_md": false 00:11:53.918 }, 00:11:53.918 "memory_domains": [ 00:11:53.918 { 00:11:53.918 "dma_device_id": "system", 00:11:53.918 "dma_device_type": 1 00:11:53.918 } 00:11:53.918 ], 00:11:53.918 "driver_specific": { 00:11:53.918 "nvme": [ 00:11:53.918 { 00:11:53.918 "trid": { 00:11:53.918 "trtype": "TCP", 00:11:53.918 "adrfam": "IPv4", 00:11:53.918 "traddr": "10.0.0.2", 00:11:53.918 "trsvcid": "4420", 00:11:53.918 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:53.918 }, 00:11:53.918 "ctrlr_data": { 00:11:53.918 "cntlid": 1, 00:11:53.918 "vendor_id": "0x8086", 00:11:53.918 "model_number": "SPDK bdev Controller", 00:11:53.918 "serial_number": "SPDK0", 
00:11:53.918 "firmware_revision": "24.09", 00:11:53.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:53.918 "oacs": { 00:11:53.918 "security": 0, 00:11:53.918 "format": 0, 00:11:53.918 "firmware": 0, 00:11:53.918 "ns_manage": 0 00:11:53.918 }, 00:11:53.918 "multi_ctrlr": true, 00:11:53.918 "ana_reporting": false 00:11:53.918 }, 00:11:53.918 "vs": { 00:11:53.918 "nvme_version": "1.3" 00:11:53.918 }, 00:11:53.918 "ns_data": { 00:11:53.918 "id": 1, 00:11:53.918 "can_share": true 00:11:53.918 } 00:11:53.918 } 00:11:53.918 ], 00:11:53.918 "mp_policy": "active_passive" 00:11:53.918 } 00:11:53.918 } 00:11:53.918 ] 00:11:53.918 00:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2248814 00:11:53.918 00:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:53.918 00:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:53.918 Running I/O for 10 seconds... 00:11:54.896 Latency(us) 00:11:54.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.896 Nvme0n1 : 1.00 13953.00 54.50 0.00 0.00 0.00 0.00 0.00 00:11:54.896 =================================================================================================================== 00:11:54.896 Total : 13953.00 54.50 0.00 0.00 0.00 0.00 0.00 00:11:54.896 00:11:55.827 00:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:11:55.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.827 Nvme0n1 : 2.00 14153.00 55.29 0.00 0.00 0.00 0.00 0.00 00:11:55.827 =================================================================================================================== 00:11:55.827 Total : 14153.00 55.29 0.00 0.00 0.00 0.00 0.00 00:11:55.827 00:11:56.084 true 00:11:56.084 00:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:11:56.084 00:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:56.341 00:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:56.341 00:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:56.341 00:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2248814 00:11:56.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.907 Nvme0n1 : 3.00 14214.00 55.52 0.00 0.00 0.00 0.00 0.00 00:11:56.907 =================================================================================================================== 00:11:56.907 Total : 14214.00 55.52 0.00 0.00 0.00 0.00 0.00 00:11:56.907 00:11:57.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.840 Nvme0n1 : 4.00 14272.50 55.75 0.00 0.00 0.00 0.00 0.00 00:11:57.840 =================================================================================================================== 00:11:57.840 Total : 14272.50 55.75 0.00 
0.00 0.00 0.00 0.00 00:11:57.840 00:11:59.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.215 Nvme0n1 : 5.00 14349.20 56.05 0.00 0.00 0.00 0.00 0.00 00:11:59.215 =================================================================================================================== 00:11:59.215 Total : 14349.20 56.05 0.00 0.00 0.00 0.00 0.00 00:11:59.215 00:12:00.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.149 Nvme0n1 : 6.00 14411.00 56.29 0.00 0.00 0.00 0.00 0.00 00:12:00.149 =================================================================================================================== 00:12:00.149 Total : 14411.00 56.29 0.00 0.00 0.00 0.00 0.00 00:12:00.149 00:12:01.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.083 Nvme0n1 : 7.00 14436.86 56.39 0.00 0.00 0.00 0.00 0.00 00:12:01.083 =================================================================================================================== 00:12:01.083 Total : 14436.86 56.39 0.00 0.00 0.00 0.00 0.00 00:12:01.083 00:12:02.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.014 Nvme0n1 : 8.00 14488.25 56.59 0.00 0.00 0.00 0.00 0.00 00:12:02.014 =================================================================================================================== 00:12:02.014 Total : 14488.25 56.59 0.00 0.00 0.00 0.00 0.00 00:12:02.014 00:12:02.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.980 Nvme0n1 : 9.00 14521.11 56.72 0.00 0.00 0.00 0.00 0.00 00:12:02.980 =================================================================================================================== 00:12:02.980 Total : 14521.11 56.72 0.00 0.00 0.00 0.00 0.00 00:12:02.980 00:12:03.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.912 Nvme0n1 : 10.00 14534.50 56.78 0.00 0.00 0.00 0.00 0.00 00:12:03.912 =================================================================================================================== 00:12:03.912 Total : 14534.50 56.78 0.00 0.00 0.00 0.00 0.00 00:12:03.912 00:12:03.912 00:12:03.912 Latency(us) 00:12:03.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.912 Nvme0n1 : 10.01 14536.66 56.78 0.00 0.00 8799.82 2196.67 16408.27 00:12:03.912 =================================================================================================================== 00:12:03.912 Total : 14536.66 56.78 0.00 0.00 8799.82 2196.67 16408.27 00:12:03.912 0 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2248679 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2248679 ']' 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2248679 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2248679 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:03.912 00:15:55 
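A quick sanity check on the tables above: the MiB/s column is just IOPS scaled by the 4 KiB I/O size, so for the final iteration 14534.50 x 4096 / 1048576 comes to 56.78 MiB/s, matching the printed value:

  awk 'BEGIN { printf "%.2f\n", 14534.50 * 4096 / 1048576 }'   # 56.78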
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2248679' 00:12:03.912 killing process with pid 2248679 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2248679 00:12:03.912 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.912 00:12:03.912 Latency(us) 00:12:03.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.912 =================================================================================================================== 00:12:03.912 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:03.912 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2248679 00:12:04.169 00:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:04.426 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:04.994 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:04.994 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:04.994 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:04.994 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:04.994 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2246125 00:12:04.994 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2246125 00:12:05.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2246125 Killed "${NVMF_APP[@]}" "$@" 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2250143 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2250143 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2250143 ']' 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.252 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:05.252 [2024-07-16 00:15:56.663142] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:12:05.252 [2024-07-16 00:15:56.663244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.252 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.252 [2024-07-16 00:15:56.727422] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.252 [2024-07-16 00:15:56.835460] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.252 [2024-07-16 00:15:56.835521] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.252 [2024-07-16 00:15:56.835549] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.252 [2024-07-16 00:15:56.835560] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.252 [2024-07-16 00:15:56.835569] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
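This restart is the whole point of the dirty variant: the long-running nvmf target (pid 2246125, Killed by the line-75 kill -9 above) never gets to close the lvstore, so the blobstore is left marked dirty on disk. A fresh nvmf_tgt is then started and the same backing file re-attached, and loading the store goes through recovery, which is exactly the "Performing recovery on blobstore" / "Recover: blob 0x0" / "Recover: blob 0x1" NOTICEs that follow. Roughly, with the netns wrapper and workspace paths elided:

  kill -9 "$nvmfpid" && wait "$nvmfpid" || true   # SIGKILL: no clean blobstore shutdown
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # load replays the dirty store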
00:12:05.252 [2024-07-16 00:15:56.835603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.510 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.510 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:05.510 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.510 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.510 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:05.510 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.510 00:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:05.769 [2024-07-16 00:15:57.251671] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:05.769 [2024-07-16 00:15:57.251817] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:05.769 [2024-07-16 00:15:57.251873] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:05.769 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:05.769 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b 00:12:05.769 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b 00:12:05.769 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:05.769 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:05.769 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:05.769 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:05.769 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:06.027 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b -t 2000 00:12:06.286 [ 00:12:06.286 { 00:12:06.286 "name": "8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b", 00:12:06.286 "aliases": [ 00:12:06.286 "lvs/lvol" 00:12:06.286 ], 00:12:06.286 "product_name": "Logical Volume", 00:12:06.286 "block_size": 4096, 00:12:06.286 "num_blocks": 38912, 00:12:06.286 "uuid": "8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b", 00:12:06.286 "assigned_rate_limits": { 00:12:06.286 "rw_ios_per_sec": 0, 00:12:06.286 "rw_mbytes_per_sec": 0, 00:12:06.286 "r_mbytes_per_sec": 0, 00:12:06.286 "w_mbytes_per_sec": 0 00:12:06.286 }, 00:12:06.286 "claimed": false, 00:12:06.286 "zoned": false, 00:12:06.286 "supported_io_types": { 00:12:06.286 "read": true, 00:12:06.286 "write": true, 00:12:06.286 "unmap": true, 00:12:06.286 "flush": false, 00:12:06.286 "reset": true, 00:12:06.286 "nvme_admin": false, 00:12:06.286 "nvme_io": false, 00:12:06.286 "nvme_io_md": 
false, 00:12:06.286 "write_zeroes": true, 00:12:06.286 "zcopy": false, 00:12:06.286 "get_zone_info": false, 00:12:06.286 "zone_management": false, 00:12:06.286 "zone_append": false, 00:12:06.286 "compare": false, 00:12:06.286 "compare_and_write": false, 00:12:06.286 "abort": false, 00:12:06.286 "seek_hole": true, 00:12:06.286 "seek_data": true, 00:12:06.286 "copy": false, 00:12:06.286 "nvme_iov_md": false 00:12:06.286 }, 00:12:06.286 "driver_specific": { 00:12:06.286 "lvol": { 00:12:06.286 "lvol_store_uuid": "76b47297-8123-4fcb-9078-98d8bd15a802", 00:12:06.286 "base_bdev": "aio_bdev", 00:12:06.286 "thin_provision": false, 00:12:06.286 "num_allocated_clusters": 38, 00:12:06.286 "snapshot": false, 00:12:06.286 "clone": false, 00:12:06.286 "esnap_clone": false 00:12:06.286 } 00:12:06.286 } 00:12:06.286 } 00:12:06.286 ] 00:12:06.286 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:06.286 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:06.286 00:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:06.545 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:06.545 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:06.545 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:06.803 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:06.803 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:07.061 [2024-07-16 00:15:58.524554] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:07.061 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:07.319 request: 00:12:07.319 { 00:12:07.319 "uuid": "76b47297-8123-4fcb-9078-98d8bd15a802", 00:12:07.319 "method": "bdev_lvol_get_lvstores", 00:12:07.319 "req_id": 1 00:12:07.319 } 00:12:07.319 Got JSON-RPC error response 00:12:07.319 response: 00:12:07.319 { 00:12:07.319 "code": -19, 00:12:07.319 "message": "No such device" 00:12:07.319 } 00:12:07.319 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:07.319 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.319 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.319 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.319 00:15:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:07.577 aio_bdev 00:12:07.577 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b 00:12:07.577 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b 00:12:07.577 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:07.577 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:07.577 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:07.577 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:07.577 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:07.835 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b -t 2000 00:12:08.092 [ 00:12:08.092 { 00:12:08.092 "name": "8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b", 00:12:08.092 "aliases": [ 00:12:08.092 "lvs/lvol" 00:12:08.092 ], 00:12:08.092 "product_name": "Logical Volume", 00:12:08.092 "block_size": 4096, 00:12:08.092 "num_blocks": 38912, 00:12:08.092 "uuid": "8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b", 00:12:08.092 "assigned_rate_limits": { 00:12:08.092 "rw_ios_per_sec": 0, 00:12:08.092 "rw_mbytes_per_sec": 0, 00:12:08.092 "r_mbytes_per_sec": 0, 00:12:08.092 "w_mbytes_per_sec": 0 00:12:08.092 }, 00:12:08.092 "claimed": false, 00:12:08.092 "zoned": false, 00:12:08.092 "supported_io_types": { 
00:12:08.092 "read": true, 00:12:08.092 "write": true, 00:12:08.092 "unmap": true, 00:12:08.092 "flush": false, 00:12:08.092 "reset": true, 00:12:08.092 "nvme_admin": false, 00:12:08.092 "nvme_io": false, 00:12:08.092 "nvme_io_md": false, 00:12:08.092 "write_zeroes": true, 00:12:08.092 "zcopy": false, 00:12:08.092 "get_zone_info": false, 00:12:08.093 "zone_management": false, 00:12:08.093 "zone_append": false, 00:12:08.093 "compare": false, 00:12:08.093 "compare_and_write": false, 00:12:08.093 "abort": false, 00:12:08.093 "seek_hole": true, 00:12:08.093 "seek_data": true, 00:12:08.093 "copy": false, 00:12:08.093 "nvme_iov_md": false 00:12:08.093 }, 00:12:08.093 "driver_specific": { 00:12:08.093 "lvol": { 00:12:08.093 "lvol_store_uuid": "76b47297-8123-4fcb-9078-98d8bd15a802", 00:12:08.093 "base_bdev": "aio_bdev", 00:12:08.093 "thin_provision": false, 00:12:08.093 "num_allocated_clusters": 38, 00:12:08.093 "snapshot": false, 00:12:08.093 "clone": false, 00:12:08.093 "esnap_clone": false 00:12:08.093 } 00:12:08.093 } 00:12:08.093 } 00:12:08.093 ] 00:12:08.093 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:08.093 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:08.093 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:08.351 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:08.351 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:08.351 00:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:08.610 00:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:08.610 00:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8a40b51d-c0f5-4a9b-a9db-10fd290c8c6b 00:12:08.868 00:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76b47297-8123-4fcb-9078-98d8bd15a802 00:12:09.152 00:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.413 00:12:09.413 real 0m19.314s 00:12:09.413 user 0m49.044s 00:12:09.413 sys 0m4.802s 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:09.413 ************************************ 00:12:09.413 END TEST lvs_grow_dirty 00:12:09.413 ************************************ 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
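The free_clusters == 61 and data_clusters == 99 checks above are the payoff: after SIGKILL and blobstore recovery, the grow survived the crash and the lvol still pins its 38 clusters (99 - 38 = 61 free), exactly as in the clean run. In sketch form, assuming $lvs again holds the store UUID:

  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # 61
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99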
00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:09.413 00:16:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:09.413 nvmf_trace.0 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.671 rmmod nvme_tcp 00:12:09.671 rmmod nvme_fabrics 00:12:09.671 rmmod nvme_keyring 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2250143 ']' 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2250143 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2250143 ']' 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2250143 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2250143 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2250143' 00:12:09.671 killing process with pid 2250143 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2250143 00:12:09.671 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2250143 00:12:09.930 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.930 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.930 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.930 
00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.931 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.931 00:16:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.931 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.931 00:16:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.839 00:16:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.839 00:12:11.839 real 0m42.623s 00:12:11.839 user 1m12.216s 00:12:11.839 sys 0m8.687s 00:12:11.839 00:16:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.839 00:16:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.839 ************************************ 00:12:11.839 END TEST nvmf_lvs_grow 00:12:11.839 ************************************ 00:12:11.839 00:16:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:11.839 00:16:03 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:12.098 00:16:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:12.098 00:16:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.098 00:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:12.098 ************************************ 00:12:12.098 START TEST nvmf_bdev_io_wait 00:12:12.098 ************************************ 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:12.098 * Looking for test storage... 
00:12:12.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.098 00:16:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.999 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:14.000 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:14.000 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:14.000 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:14.000 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.000 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:14.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:12:14.259 00:12:14.259 --- 10.0.0.2 ping statistics --- 00:12:14.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.259 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:12:14.259 00:12:14.259 --- 10.0.0.1 ping statistics --- 00:12:14.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.259 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2252666 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2252666 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2252666 ']' 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.259 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.259 [2024-07-16 00:16:05.719243] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
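The connectivity just verified is built by splitting the two e810 ports between network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk to host the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # plus the reverse ping from inside the namespace

The target is then launched under ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.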
00:12:14.259 [2024-07-16 00:16:05.719332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.259 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.259 [2024-07-16 00:16:05.782373] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.519 [2024-07-16 00:16:05.891873] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.519 [2024-07-16 00:16:05.891940] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.519 [2024-07-16 00:16:05.891954] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.519 [2024-07-16 00:16:05.891964] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.519 [2024-07-16 00:16:05.891973] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.519 [2024-07-16 00:16:05.892099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.519 [2024-07-16 00:16:05.892163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.519 [2024-07-16 00:16:05.892207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.519 [2024-07-16 00:16:05.892209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.519 00:16:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.519 [2024-07-16 00:16:06.025818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
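Because the target was started with --wait-for-rpc, it comes up idle and the harness completes initialization over the RPC socket before any subsystems exist: bdev options are set, the framework is started, then the TCP transport is created (the *** TCP Transport Init *** notice above). The three RPCs, condensed, with flag values verbatim from this run; the tiny pool sizes appear deliberate, since this test exists to exercise the bdev_io wait path when that pool is exhausted:

    rpc.py bdev_set_options -p 5 -c 1               # shrink the bdev_io pool and per-thread cache
    rpc.py framework_start_init                     # finish deferred subsystem initialization
    rpc.py nvmf_create_transport -t tcp -o -u 8192  # create the TCP transport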
00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.519 Malloc0 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.519 [2024-07-16 00:16:06.091648] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2252695 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2252696 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2252699 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:14.519 { 00:12:14.519 "params": { 00:12:14.519 "name": "Nvme$subsystem", 00:12:14.519 "trtype": "$TEST_TRANSPORT", 00:12:14.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.519 "adrfam": "ipv4", 00:12:14.519 "trsvcid": "$NVMF_PORT", 00:12:14.519 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.519 "hdgst": ${hdgst:-false}, 00:12:14.519 "ddgst": ${ddgst:-false} 00:12:14.519 }, 00:12:14.519 "method": "bdev_nvme_attach_controller" 00:12:14.519 } 00:12:14.519 EOF 00:12:14.519 )") 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2252701 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:14.519 { 00:12:14.519 "params": { 00:12:14.519 "name": "Nvme$subsystem", 00:12:14.519 "trtype": "$TEST_TRANSPORT", 00:12:14.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.519 "adrfam": "ipv4", 00:12:14.519 "trsvcid": "$NVMF_PORT", 00:12:14.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.519 "hdgst": ${hdgst:-false}, 00:12:14.519 "ddgst": ${ddgst:-false} 00:12:14.519 }, 00:12:14.519 "method": "bdev_nvme_attach_controller" 00:12:14.519 } 00:12:14.519 EOF 00:12:14.519 )") 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:14.519 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:14.520 { 00:12:14.520 "params": { 00:12:14.520 "name": "Nvme$subsystem", 00:12:14.520 "trtype": "$TEST_TRANSPORT", 00:12:14.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.520 "adrfam": "ipv4", 00:12:14.520 "trsvcid": "$NVMF_PORT", 00:12:14.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.520 "hdgst": ${hdgst:-false}, 00:12:14.520 "ddgst": ${ddgst:-false} 00:12:14.520 }, 00:12:14.520 "method": "bdev_nvme_attach_controller" 00:12:14.520 } 00:12:14.520 EOF 00:12:14.520 )") 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:14.520 { 00:12:14.520 "params": { 00:12:14.520 "name": "Nvme$subsystem", 00:12:14.520 "trtype": "$TEST_TRANSPORT", 00:12:14.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.520 "adrfam": "ipv4", 00:12:14.520 "trsvcid": "$NVMF_PORT", 00:12:14.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.520 "hdgst": ${hdgst:-false}, 00:12:14.520 "ddgst": ${ddgst:-false} 00:12:14.520 }, 00:12:14.520 "method": "bdev_nvme_attach_controller" 00:12:14.520 } 00:12:14.520 EOF 00:12:14.520 )") 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2252695 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:14.520 "params": { 00:12:14.520 "name": "Nvme1", 00:12:14.520 "trtype": "tcp", 00:12:14.520 "traddr": "10.0.0.2", 00:12:14.520 "adrfam": "ipv4", 00:12:14.520 "trsvcid": "4420", 00:12:14.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.520 "hdgst": false, 00:12:14.520 "ddgst": false 00:12:14.520 }, 00:12:14.520 "method": "bdev_nvme_attach_controller" 00:12:14.520 }' 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:14.520 "params": { 00:12:14.520 "name": "Nvme1", 00:12:14.520 "trtype": "tcp", 00:12:14.520 "traddr": "10.0.0.2", 00:12:14.520 "adrfam": "ipv4", 00:12:14.520 "trsvcid": "4420", 00:12:14.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.520 "hdgst": false, 00:12:14.520 "ddgst": false 00:12:14.520 }, 00:12:14.520 "method": "bdev_nvme_attach_controller" 00:12:14.520 }' 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:14.520 "params": { 00:12:14.520 "name": "Nvme1", 00:12:14.520 "trtype": "tcp", 00:12:14.520 "traddr": "10.0.0.2", 00:12:14.520 "adrfam": "ipv4", 00:12:14.520 "trsvcid": "4420", 00:12:14.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.520 "hdgst": false, 00:12:14.520 "ddgst": false 00:12:14.520 }, 00:12:14.520 "method": "bdev_nvme_attach_controller" 00:12:14.520 }' 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:14.520 00:16:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:14.520 "params": { 00:12:14.520 "name": "Nvme1", 00:12:14.520 "trtype": "tcp", 00:12:14.520 "traddr": "10.0.0.2", 00:12:14.520 "adrfam": "ipv4", 00:12:14.520 "trsvcid": "4420", 00:12:14.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.520 "hdgst": false, 00:12:14.520 "ddgst": false 00:12:14.520 }, 00:12:14.520 "method": "bdev_nvme_attach_controller" 00:12:14.520 }' 00:12:14.778 [2024-07-16 00:16:06.140160] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:12:14.778 [2024-07-16 00:16:06.140160] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:12:14.778 [2024-07-16 00:16:06.140161] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:12:14.778 [2024-07-16 00:16:06.140161] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
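Each bdevperf instance receives its bdev configuration over an anonymous pipe: gen_nvmf_target_json emits the bdev_nvme_attach_controller stanza printed above, and the harness passes it as --json /dev/fd/63. Written out as a standalone file it would look roughly like the following; the inner stanza is verbatim from the log, while the subsystems/bdev envelope is the standard SPDK app-config shape and is assumed here rather than shown in the trace:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

Each bdevperf then runs its workload (-w write/read/flush/unmap) against the resulting Nvme1n1 bdev at queue depth 128 with 4 KiB IOs for 1 second, as the four Running I/O lines below show.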
00:12:14.778 [2024-07-16 00:16:06.140263] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:12:14.778 [2024-07-16 00:16:06.140263] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:12:14.778 [2024-07-16 00:16:06.140264] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:12:14.778 [2024-07-16 00:16:06.140263] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:12:14.778 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.778 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-16 00:16:06.314177] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.778 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.778 [2024-07-16 00:16:06.388852] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.036 [2024-07-16 00:16:06.412892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:15.036 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.036 [2024-07-16 00:16:06.480118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:15.036 [2024-07-16 00:16:06.486109] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.036 [2024-07-16 00:16:06.584962] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.036 [2024-07-16 00:16:06.586509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:15.293 [2024-07-16 00:16:06.687512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:15.294 Running I/O for 1 seconds... 00:12:15.294 Running I/O for 1 seconds... 00:12:15.294 Running I/O for 1 seconds... 00:12:15.552 Running I/O for 1 seconds...
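Four result tables follow, one per job. The MiB/s column is just IOPS scaled by the 4 KiB IO size, MiB/s = IOPS x 4096 / 2^20; for the write job, 11381.19 x 4096 / 1048576 = 44.46 MiB/s, matching the table. Average latency is likewise consistent with Little's law at queue depth 128: for the flush job, 128 / 182873.82 IOPS is roughly 700 us against the reported 697.17 us. The flush job's far higher IOPS is expected, presumably because a flush against the Malloc backing bdev completes without performing real IO.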
00:12:16.487 00:12:16.487 Latency(us) 00:12:16.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.487 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:16.487 Nvme1n1 : 1.01 11381.19 44.46 0.00 0.00 11187.61 7233.23 20971.52 00:12:16.487 =================================================================================================================== 00:12:16.487 Total : 11381.19 44.46 0.00 0.00 11187.61 7233.23 20971.52 00:12:16.487 00:12:16.487 Latency(us) 00:12:16.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.487 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:16.487 Nvme1n1 : 1.01 9933.14 38.80 0.00 0.00 12829.75 7621.59 23301.69 00:12:16.487 =================================================================================================================== 00:12:16.487 Total : 9933.14 38.80 0.00 0.00 12829.75 7621.59 23301.69 00:12:16.487 00:12:16.487 Latency(us) 00:12:16.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.487 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:16.487 Nvme1n1 : 1.01 2161.27 8.44 0.00 0.00 58997.09 8204.14 87381.33 00:12:16.487 =================================================================================================================== 00:12:16.487 Total : 2161.27 8.44 0.00 0.00 58997.09 8204.14 87381.33 00:12:16.487 00:12:16.487 Latency(us) 00:12:16.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.487 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:16.487 Nvme1n1 : 1.00 182873.82 714.35 0.00 0.00 697.17 270.03 952.70 00:12:16.487 =================================================================================================================== 00:12:16.487 Total : 182873.82 714.35 0.00 0.00 697.17 270.03 952.70 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2252696 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2252699 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2252701 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.745 rmmod nvme_tcp 00:12:16.745 rmmod nvme_fabrics 00:12:16.745 rmmod nvme_keyring 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2252666 ']' 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2252666 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2252666 ']' 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2252666 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.745 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2252666 00:12:17.003 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:17.003 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:17.003 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2252666' 00:12:17.003 killing process with pid 2252666 00:12:17.003 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2252666 00:12:17.003 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2252666 00:12:17.261 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.261 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.261 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.261 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.261 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.261 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.261 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.261 00:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.167 00:16:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.167 00:12:19.167 real 0m7.194s 00:12:19.167 user 0m16.014s 00:12:19.167 sys 0m3.562s 00:12:19.167 00:16:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.167 00:16:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:19.167 ************************************ 00:12:19.167 END TEST nvmf_bdev_io_wait 00:12:19.167 ************************************ 00:12:19.167 00:16:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:19.167 00:16:10 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:19.167 00:16:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.167 00:16:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.167 00:16:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.167 ************************************ 00:12:19.167 START TEST nvmf_queue_depth 00:12:19.167 ************************************ 
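Each test above runs under the harness's run_test wrapper, which prints the START/END banners and the real/user/sys timing seen in this log. A rough sketch of that pattern, with simplified banner formatting (the actual helper in autotest_common.sh also manages xtrace and validates its arguments):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"              # timing prints before the END banner, as in the log
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    # e.g.: run_test nvmf_queue_depth test/nvmf/target/queue_depth.sh --transport=tcp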
00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:19.167 * Looking for test storage... 00:12:19.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.167 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.427 00:16:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.329 
00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:21.329 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:21.329 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:21.329 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:21.329 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:12:21.329 00:12:21.329 --- 10.0.0.2 ping statistics --- 00:12:21.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.329 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:12:21.329 00:12:21.329 --- 10.0.0.1 ping statistics --- 00:12:21.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.329 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:21.329 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2254924 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2254924 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2254924 ']' 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.330 00:16:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.588 [2024-07-16 00:16:12.955457] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
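
The trace above builds the test topology before nvmf_tgt is launched: nvmf_tcp_init moves one port of the E810 pair (cvl_0_0, the target side) into a private network namespace and leaves its peer (cvl_0_1, the initiator side) in the root namespace, so target and initiator get separate network stacks and the NVMe/TCP traffic traverses the link rather than the kernel loopback path. A minimal sketch of the equivalent commands, using the interface names, addresses, and port from this run:

# Reconstructed from the nvmf_tcp_init trace above; interface/namespace names
# (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and addresses are the ones this run used.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
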
00:12:21.588 [2024-07-16 00:16:12.955530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.588 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.588 [2024-07-16 00:16:13.022102] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.588 [2024-07-16 00:16:13.135475] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.588 [2024-07-16 00:16:13.135528] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.588 [2024-07-16 00:16:13.135557] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.588 [2024-07-16 00:16:13.135568] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.588 [2024-07-16 00:16:13.135578] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.588 [2024-07-16 00:16:13.135612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 [2024-07-16 00:16:13.288169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 Malloc0 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.846 
00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 [2024-07-16 00:16:13.358034] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2255058 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2255058 /var/tmp/bdevperf.sock 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2255058 ']' 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:21.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.846 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 [2024-07-16 00:16:13.404702] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
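
Everything queue_depth.sh does from here is driven over JSON-RPC, as the rpc_cmd lines above and below show: a TCP transport with 8 KiB I/O units, a 64 MiB malloc bdev with 512-byte blocks, a subsystem exposing it on 10.0.0.2:4420, then a bdevperf instance (started with -z, so it idles until told to run) that attaches to the subsystem and is kicked off through its own RPC socket. A condensed sketch of that sequence, assuming rpc_cmd is a thin wrapper around scripts/rpc.py (the script multipath.sh later sets rpc_py to) and that paths are relative to the spdk checkout:

# Sketch of queue_depth.sh@23-35 as traced in this log. The direct rpc.py
# invocation is an assumption; the trace only shows the rpc_cmd wrapper.
RPC=scripts/rpc.py

# Target side (nvmf_tgt already listens on the default /var/tmp/spdk.sock):
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: queue depth 1024, 4 KiB verify I/O, 10 seconds:
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The numbers in the results below are self-consistent: 8104.40 IOPS of 4096-byte I/O is 8104.40 x 4096 / 2^20 = 31.66 MiB/s, and with 1024 commands kept in flight Little's law predicts 1024 / 8104.40 = 126 ms of queueing delay, in line with the reported 125838 us average latency.
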
00:12:21.846 [2024-07-16 00:16:13.404782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255058 ] 00:12:21.846 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.104 [2024-07-16 00:16:13.466994] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.104 [2024-07-16 00:16:13.582684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.104 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.104 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:22.104 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:22.104 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.104 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.362 NVMe0n1 00:12:22.362 00:16:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.362 00:16:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:22.619 Running I/O for 10 seconds... 00:12:32.604 00:12:32.604 Latency(us) 00:12:32.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.604 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:32.604 Verification LBA range: start 0x0 length 0x4000 00:12:32.604 NVMe0n1 : 10.11 8104.40 31.66 0.00 0.00 125838.11 25243.50 84662.80 00:12:32.604 =================================================================================================================== 00:12:32.604 Total : 8104.40 31.66 0.00 0.00 125838.11 25243.50 84662.80 00:12:32.604 0 00:12:32.604 00:16:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2255058 00:12:32.604 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2255058 ']' 00:12:32.604 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2255058 00:12:32.604 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:32.604 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:32.604 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2255058 00:12:32.862 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:32.862 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:32.862 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2255058' 00:12:32.862 killing process with pid 2255058 00:12:32.862 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2255058 00:12:32.862 Received shutdown signal, test time was about 10.000000 seconds 00:12:32.862 00:12:32.862 Latency(us) 00:12:32.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.862 
=================================================================================================================== 00:12:32.862 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:32.862 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2255058 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.121 rmmod nvme_tcp 00:12:33.121 rmmod nvme_fabrics 00:12:33.121 rmmod nvme_keyring 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2254924 ']' 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2254924 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2254924 ']' 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2254924 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2254924 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2254924' 00:12:33.121 killing process with pid 2254924 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2254924 00:12:33.121 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2254924 00:12:33.380 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.380 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.380 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.380 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.380 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.380 00:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.380 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.380 00:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.920 00:16:26 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:35.920 00:12:35.920 real 0m16.235s 00:12:35.920 user 0m22.883s 00:12:35.920 sys 0m3.111s 00:12:35.920 00:16:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.920 00:16:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.920 ************************************ 00:12:35.920 END TEST nvmf_queue_depth 00:12:35.920 ************************************ 00:12:35.920 00:16:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:35.920 00:16:26 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:35.920 00:16:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.920 00:16:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.920 00:16:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.920 ************************************ 00:12:35.920 START TEST nvmf_target_multipath 00:12:35.920 ************************************ 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:35.920 * Looking for test storage... 00:12:35.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.920 00:16:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:37.828 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:37.828 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:37.828 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:37.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.828 00:16:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:37.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:12:37.828 00:12:37.828 --- 10.0.0.2 ping statistics --- 00:12:37.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.828 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:12:37.828 00:12:37.828 --- 10.0.0.1 ping statistics --- 00:12:37.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.828 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.828 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:37.829 only one NIC for nvmf test 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:37.829 rmmod nvme_tcp 00:12:37.829 rmmod nvme_fabrics 00:12:37.829 rmmod nvme_keyring 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.829 00:16:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.740 00:12:39.740 real 0m4.210s 00:12:39.740 user 0m0.779s 00:12:39.740 sys 0m1.404s 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.740 00:16:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:39.740 ************************************ 00:12:39.740 END TEST nvmf_target_multipath 00:12:39.740 ************************************ 00:12:39.740 00:16:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.741 00:16:31 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:39.741 00:16:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.741 00:16:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.741 00:16:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.741 ************************************ 00:12:39.741 START TEST nvmf_zcopy 00:12:39.741 ************************************ 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:39.741 * Looking for test storage... 
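
The multipath run that just ended above is effectively a no-op: nvmf_tcp_init assigned no second target address (NVMF_SECOND_TARGET_IP stays empty in the trace), and the multipath test needs two usable paths, so the script prints 'only one NIC for nvmf test', tears the environment back down, and exits 0 before any subsystem is created. The guard traced at multipath.sh@45-48 reconstructs to roughly the following sketch; the variable under test is an inference, since the trace only shows '[' -z ']' with an empty expansion:

# Reconstruction of the early-exit guard (multipath.sh@45-48 in the trace above).
# The tested variable is assumed from the NVMF_SECOND_TARGET_IP= assignment
# earlier in the nvmf_tcp_init trace; the trace itself does not name it.
if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
    echo 'only one NIC for nvmf test'
    nvmftestfini
    exit 0
fi
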
00:12:39.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.741 00:16:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:41.648 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.649 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.649 
00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.649 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.649 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:41.649 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.649 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:41.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:12:41.907 00:12:41.907 --- 10.0.0.2 ping statistics --- 00:12:41.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.907 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:12:41.907 00:12:41.907 --- 10.0.0.1 ping statistics --- 00:12:41.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.907 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.907 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2260112 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2260112 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2260112 ']' 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.908 00:16:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.908 [2024-07-16 00:16:33.454196] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:12:41.908 [2024-07-16 00:16:33.454284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.908 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.166 [2024-07-16 00:16:33.524359] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.166 [2024-07-16 00:16:33.641023] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.166 [2024-07-16 00:16:33.641084] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:42.166 [2024-07-16 00:16:33.641100] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.166 [2024-07-16 00:16:33.641113] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.166 [2024-07-16 00:16:33.641125] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.166 [2024-07-16 00:16:33.641155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.102 [2024-07-16 00:16:34.423918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.102 [2024-07-16 00:16:34.440079] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.102 malloc0 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.102 
00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:43.102 { 00:12:43.102 "params": { 00:12:43.102 "name": "Nvme$subsystem", 00:12:43.102 "trtype": "$TEST_TRANSPORT", 00:12:43.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:43.102 "adrfam": "ipv4", 00:12:43.102 "trsvcid": "$NVMF_PORT", 00:12:43.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:43.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:43.102 "hdgst": ${hdgst:-false}, 00:12:43.102 "ddgst": ${ddgst:-false} 00:12:43.102 }, 00:12:43.102 "method": "bdev_nvme_attach_controller" 00:12:43.102 } 00:12:43.102 EOF 00:12:43.102 )") 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:43.102 00:16:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:43.102 "params": { 00:12:43.102 "name": "Nvme1", 00:12:43.102 "trtype": "tcp", 00:12:43.102 "traddr": "10.0.0.2", 00:12:43.102 "adrfam": "ipv4", 00:12:43.102 "trsvcid": "4420", 00:12:43.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.102 "hdgst": false, 00:12:43.102 "ddgst": false 00:12:43.102 }, 00:12:43.102 "method": "bdev_nvme_attach_controller" 00:12:43.102 }' 00:12:43.102 [2024-07-16 00:16:34.524378] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:12:43.102 [2024-07-16 00:16:34.524452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260265 ] 00:12:43.102 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.102 [2024-07-16 00:16:34.587778] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.102 [2024-07-16 00:16:34.706451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.362 Running I/O for 10 seconds... 
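Editor's note: at this point the whole zcopy fixture has been provisioned over RPC: a zero-copy TCP transport, subsystem cnode1 with data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. Condensed into direct scripts/rpc.py calls (rpc_cmd in the trace is a wrapper; treating it as rpc.py here is an assumption, while the flags themselves are copied verbatim from the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed wrapper target

    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                     # transport with zcopy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                                   # allow any host, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420                                       # data listener
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 # discovery listener
    $rpc bdev_malloc_create 32 4096 -b malloc0                            # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # expose malloc0 as NSID 1

gen_nvmf_target_json then expands the heredoc shown above into the single-line JSON that bdevperf reads from /dev/fd/62, so the initiator-side NVMe bdev (Nvme1n1 in the results that follow) attaches to 10.0.0.2:4420 without a config file on disk.
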
00:12:53.343
00:12:53.343 Latency(us)
00:12:53.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:53.343 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:53.343 Verification LBA range: start 0x0 length 0x1000
00:12:53.343 Nvme1n1 : 10.01 5894.01 46.05 0.00 0.00 21657.38 1808.31 31068.92
00:12:53.343 ===================================================================================================================
00:12:53.343 Total : 5894.01 46.05 0.00 0.00 21657.38 1808.31 31068.92
00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2261578 00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.600 00:16:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:53.600 { 00:12:53.600 "params": { 00:12:53.600 "name": "Nvme$subsystem", 00:12:53.601 "trtype": "$TEST_TRANSPORT", 00:12:53.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.601 "adrfam": "ipv4", 00:12:53.601 "trsvcid": "$NVMF_PORT", 00:12:53.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.601 "hdgst": ${hdgst:-false}, 00:12:53.601 "ddgst": ${ddgst:-false} 00:12:53.601 }, 00:12:53.601 "method": "bdev_nvme_attach_controller" 00:12:53.601 } 00:12:53.601 EOF 00:12:53.601 )") 00:12:53.601 00:16:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:53.601 [2024-07-16 00:16:45.201853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.601 [2024-07-16 00:16:45.201937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.601 00:16:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:12:53.601 00:16:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:53.601 00:16:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.601 "params": { 00:12:53.601 "name": "Nvme1", 00:12:53.601 "trtype": "tcp", 00:12:53.601 "traddr": "10.0.0.2", 00:12:53.601 "adrfam": "ipv4", 00:12:53.601 "trsvcid": "4420", 00:12:53.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.601 "hdgst": false, 00:12:53.601 "ddgst": false 00:12:53.601 }, 00:12:53.601 "method": "bdev_nvme_attach_controller" 00:12:53.601 }' 00:12:53.601 [2024-07-16 00:16:45.209810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.601 [2024-07-16 00:16:45.209837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.872 [2024-07-16 00:16:45.217822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.872 [2024-07-16 00:16:45.217846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.872 [2024-07-16 00:16:45.225838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.872 [2024-07-16 00:16:45.225860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.872 [2024-07-16 00:16:45.233872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.872 [2024-07-16 00:16:45.233901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.240041] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:12:53.873 [2024-07-16 00:16:45.240115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261578 ] 00:12:53.873 [2024-07-16 00:16:45.241899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.241920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.249922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.249950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.257945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.257966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.265968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.265989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.873 [2024-07-16 00:16:45.273995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.274018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.282007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.282030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.290026] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.290048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.298048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.298070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.304305] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.873 [2024-07-16 00:16:45.306067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.306087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.314128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.314184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.322130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.322176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.330135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.330174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.338172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.338193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.346195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.346215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.354215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.354253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.362252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.362277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.370289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.370321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.378333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.378373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.386320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.386346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.394342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.394366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.402362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.402387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.410384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.410408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.418443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.418469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.426418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.426464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.426650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.873 [2024-07-16 00:16:45.434451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.434476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.442492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.442524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.450532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.450571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.458550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.458589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.466581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.466625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.873 [2024-07-16 00:16:45.474607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.873 [2024-07-16 00:16:45.474651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.156 [2024-07-16 00:16:45.482623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.156 [2024-07-16 00:16:45.482665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.490647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.490689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.498632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.498658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.506688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.506730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.514715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:54.157 [2024-07-16 00:16:45.514759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.522720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.522753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.530719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.530743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.538739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.538764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.546849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.546884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.554870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.554906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.562897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.562935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.570934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.570957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.578952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.578976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.586970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.586994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.594987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.595008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.602998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.603018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.611020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.611040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.619042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.619063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.627064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.627086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.635086] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.635109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.643106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.643127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.651132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.651153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.659169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.659189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.667193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.667228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.675231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.675258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.683254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.683280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.691275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.691299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.699311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.699335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.707329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.707354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.715357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.715382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.723382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.723416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.731417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.731448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.739430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.739456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 Running I/O for 5 seconds... 
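Editor's note: from here to the end of the capture the log is a steady stream of paired errors, spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 and nvmf_rpc_ns_paused reporting the failed add. The loop driving them is not itself visible in the capture; a plausible reading is that while the 5-second randrw job runs, the script keeps re-adding the already-attached namespace, since each attempt pauses and resumes the subsystem and so exercises the pause path under live zcopy I/O. A sketch of such a loop, with every detail beyond the RPC itself assumed:

    # Assumed shape of the loop behind the repeated errors below; only its
    # output (the paired *ERROR* lines) is visible in this capture.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed
    while kill -0 "$perfpid" 2> /dev/null; do      # bdevperf pid 2261578 from the trace
        # NSID 1 is already in use, so the target logs the two errors and the
        # RPC fails; the attempt still pauses/resumes the subsystem under I/O.
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
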
00:12:54.157 [2024-07-16 00:16:45.747523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.747558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.157 [2024-07-16 00:16:45.764052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.157 [2024-07-16 00:16:45.764090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.778668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.778709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.792251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.792289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.807622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.807661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.822204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.822258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.836967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.837003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.850325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.850363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.863874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.863945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.878385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.878423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.892512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.892550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.906256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.906306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.920146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.920180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.933976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.934011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.947679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 
[2024-07-16 00:16:45.947715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.960253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.960287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.973421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.973463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.985354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.985389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:45.998749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:45.998777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:46.011725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:46.011758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.416 [2024-07-16 00:16:46.025251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.416 [2024-07-16 00:16:46.025285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.038240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.675 [2024-07-16 00:16:46.038275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.050965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.675 [2024-07-16 00:16:46.051000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.063248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.675 [2024-07-16 00:16:46.063282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.077109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.675 [2024-07-16 00:16:46.077138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.089755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.675 [2024-07-16 00:16:46.089782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.102783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.675 [2024-07-16 00:16:46.102816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.115704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.675 [2024-07-16 00:16:46.115731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.128186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.675 [2024-07-16 00:16:46.128214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.675 [2024-07-16 00:16:46.141026] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.675 [2024-07-16 00:16:46.141061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-entry pair -- subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- repeats roughly every 13 ms, about 300 times in total, from [2024-07-16 00:16:46.153814] through [2024-07-16 00:16:50.085436] (elapsed 00:12:54.675 to 00:12:58.570); identical entries elided ...]
00:12:58.570 [2024-07-16 00:16:50.098765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:58.570 [2024-07-16 00:16:50.098798]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.570 [2024-07-16 00:16:50.112358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.570 [2024-07-16 00:16:50.112391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.570 [2024-07-16 00:16:50.125508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.570 [2024-07-16 00:16:50.125541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.570 [2024-07-16 00:16:50.139175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.570 [2024-07-16 00:16:50.139207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.570 [2024-07-16 00:16:50.152933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.570 [2024-07-16 00:16:50.152978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.570 [2024-07-16 00:16:50.166169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.570 [2024-07-16 00:16:50.166202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.570 [2024-07-16 00:16:50.178658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.570 [2024-07-16 00:16:50.178691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.192371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.192405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.205736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.205769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.218672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.218705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.231475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.231509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.244632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.244665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.258106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.258139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.271298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.271330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.283602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.283634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.297541] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.297573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.309644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.309686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.323229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.323271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.335228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.335270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.347547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.347595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.360112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.360147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.372829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.372885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.386889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.386914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.399680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.399726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.412181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.412213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.426077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.426111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.830 [2024-07-16 00:16:50.438593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.830 [2024-07-16 00:16:50.438625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.452676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.452710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.464594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.464628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.478002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.478030] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.491164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.491214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.504453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.504501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.517845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.517905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.531396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.531430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.544632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.544666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.557485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.557519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.570622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.091 [2024-07-16 00:16:50.570655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.091 [2024-07-16 00:16:50.584371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.584405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.596964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.596998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.609846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.609906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.622233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.622267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.636106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.636135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.648558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.648594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.662037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.662066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.674854] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.674905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.688404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.688438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.092 [2024-07-16 00:16:50.701386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.092 [2024-07-16 00:16:50.701415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.714667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.714716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.727788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.727816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.740886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.740913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.755942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.755971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.765565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.765606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 00:12:59.353 Latency(us) 00:12:59.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.353 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:59.353 Nvme1n1 : 5.01 9726.52 75.99 0.00 0.00 13142.81 4951.61 29127.11 00:12:59.353 =================================================================================================================== 00:12:59.353 Total : 9726.52 75.99 0.00 0.00 13142.81 4951.61 29127.11 00:12:59.353 [2024-07-16 00:16:50.771429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.771454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.779447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.779471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.787465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.787488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.795561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.795614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.353 [2024-07-16 00:16:50.803580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.353 [2024-07-16 00:16:50.803631] 
00:12:59.353 [2024-07-16 00:16:50.771429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:59.353 [2024-07-16 00:16:50.771454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the pair keeps recurring at ~8 ms intervals from 00:16:50.779 through 00:16:51.020 while the test tears down; several dozen further identical pairs omitted)
00:12:59.615 [2024-07-16 00:16:51.028133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:59.615 [2024-07-16 00:16:51.028168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:59.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2261578) - No such process
00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy --
target/zcopy.sh@49 -- # wait 2261578 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.615 delay0 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.615 00:16:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:59.615 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.615 [2024-07-16 00:16:51.145183] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:06.184 Initializing NVMe Controllers 00:13:06.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:06.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:06.184 Initialization complete. Launching workers. 
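The sequence just traced can be replayed by hand against a running target. A minimal sketch using scripts/rpc.py; the rpc.py invocation path and the pre-existing malloc bdev malloc0 are assumptions, while all flags are copied from the rpc_cmd calls above:

  # swap the contested NSID 1 for a deliberately slow delay bdev
  # (the bdev_delay_create latency arguments are in microseconds)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue I/O against the slow namespace and abort it: 5 s, QD 64, 50/50 randrw
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'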
00:13:06.184 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 88 00:13:06.184 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 375, failed to submit 33 00:13:06.184 success 175, unsuccess 200, failed 0 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.184 rmmod nvme_tcp 00:13:06.184 rmmod nvme_fabrics 00:13:06.184 rmmod nvme_keyring 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:06.184 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2260112 ']' 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2260112 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2260112 ']' 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2260112 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2260112 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2260112' 00:13:06.185 killing process with pid 2260112 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2260112 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2260112 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.185 00:16:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.720 00:16:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.720 00:13:08.720 real 0m28.499s 00:13:08.720 user 0m42.123s 00:13:08.720 sys 0m8.171s 00:13:08.720 00:16:59 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.720 00:16:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.720 ************************************ 00:13:08.720 END TEST nvmf_zcopy 00:13:08.720 ************************************ 00:13:08.720 00:16:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:08.720 00:16:59 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:08.720 00:16:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.720 00:16:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.720 00:16:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.720 ************************************ 00:13:08.720 START TEST nvmf_nmic 00:13:08.720 ************************************ 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:08.720 * Looking for test storage... 00:13:08.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.720 00:16:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.721 00:16:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:10.621 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:10.621 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.621 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:10.622 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:10.622 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:13:10.622 00:13:10.622 --- 10.0.0.2 ping statistics --- 00:13:10.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.622 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:13:10.622 00:13:10.622 --- 10.0.0.1 ping statistics --- 00:13:10.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.622 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.622 00:17:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2264916 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2264916 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2264916 ']' 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.622 00:17:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.622 [2024-07-16 00:17:02.069213] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:13:10.622 [2024-07-16 00:17:02.069298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.622 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.622 [2024-07-16 00:17:02.138776] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.881 [2024-07-16 00:17:02.260884] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.881 [2024-07-16 00:17:02.260955] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
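The network plumbing traced above can be reproduced outside the suite. A rough sketch, assuming the same ice port names (cvl_0_0/cvl_0_1) found during device discovery and an spdk build tree as the working directory; the readiness loop at the end is an assumption standing in for the suite's waitforlisten helper:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll the RPC unix socket until the target answers (crude waitforlisten substitute;
  # the socket lives on the filesystem, so rpc.py can run from the root namespace)
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done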
00:13:10.881 [2024-07-16 00:17:02.260970] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.881 [2024-07-16 00:17:02.260984] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.881 [2024-07-16 00:17:02.260995] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.881 [2024-07-16 00:17:02.261052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.881 [2024-07-16 00:17:02.261107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.881 [2024-07-16 00:17:02.261152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.881 [2024-07-16 00:17:02.261155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.449 [2024-07-16 00:17:03.048061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.449 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 Malloc0 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 [2024-07-16 00:17:03.100082] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:11.709 test case1: single bdev can't be used in multiple subsystems 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 [2024-07-16 00:17:03.123937] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:11.709 [2024-07-16 00:17:03.123968] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:11.709 [2024-07-16 00:17:03.123983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.709 request: 00:13:11.709 { 00:13:11.709 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:11.709 "namespace": { 00:13:11.709 "bdev_name": "Malloc0", 00:13:11.709 "no_auto_visible": false 00:13:11.709 }, 00:13:11.709 "method": "nvmf_subsystem_add_ns", 00:13:11.709 "req_id": 1 00:13:11.709 } 00:13:11.709 Got JSON-RPC error response 00:13:11.709 response: 00:13:11.709 { 00:13:11.709 "code": -32602, 00:13:11.709 "message": "Invalid parameters" 00:13:11.709 } 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:11.709 Adding namespace failed - expected result. 
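[Editor's note on test case1 above] The failure is the point of the test: nvmf_subsystem_add_ns opens the backing bdev with an exclusive write claim, so once cnode1 owns Malloc0 a second subsystem cannot attach it, and the RPC fails with -32602 as shown. A condensed sketch of the sequence, using the names from the trace (this assumes a running nvmf_tgt and scripts/rpc.py from the SPDK tree on the default socket; it is a sketch of the traced calls, not the nmic.sh script itself):

  # build one subsystem that takes the exclusive_write claim on Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # now try to attach the same bdev to a second subsystem; this must fail
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo 'unexpected: namespace added twice' \
    || echo 'expected: bdev Malloc0 already claimed'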
00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:11.709 test case2: host connect to nvmf target in multiple paths 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 [2024-07-16 00:17:03.132040] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:11.709 00:17:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.710 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.314 00:17:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:12.878 00:17:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.878 00:17:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.878 00:17:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.878 00:17:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:12.878 00:17:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:15.409 00:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:15.410 00:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:15.410 00:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.410 00:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:15.410 00:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.410 00:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:15.410 00:17:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:15.410 [global] 00:13:15.410 thread=1 00:13:15.410 invalidate=1 00:13:15.410 rw=write 00:13:15.410 time_based=1 00:13:15.410 runtime=1 00:13:15.410 ioengine=libaio 00:13:15.410 direct=1 00:13:15.410 bs=4096 00:13:15.410 iodepth=1 00:13:15.410 norandommap=0 00:13:15.410 numjobs=1 00:13:15.410 00:13:15.410 verify_dump=1 00:13:15.410 verify_backlog=512 00:13:15.410 verify_state_save=0 00:13:15.410 do_verify=1 00:13:15.410 verify=crc32c-intel 00:13:15.410 [job0] 00:13:15.410 filename=/dev/nvme0n1 00:13:15.410 Could not set queue depth (nvme0n1) 00:13:15.410 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:15.410 fio-3.35 00:13:15.410 Starting 1 thread 00:13:16.344 00:13:16.344 job0: (groupid=0, jobs=1): err= 0: pid=2265603: Tue Jul 16 00:17:07 2024 00:13:16.344 read: IOPS=1500, BW=6002KiB/s (6146kB/s)(6008KiB/1001msec) 00:13:16.344 slat (nsec): min=5612, max=66972, avg=13677.25, stdev=9650.86 
00:13:16.344 clat (usec): min=289, max=675, avg=349.64, stdev=44.01 00:13:16.344 lat (usec): min=295, max=683, avg=363.32, stdev=50.75 00:13:16.344 clat percentiles (usec): 00:13:16.344 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:13:16.344 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:13:16.344 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 437], 00:13:16.344 | 99.00th=[ 502], 99.50th=[ 537], 99.90th=[ 619], 99.95th=[ 676], 00:13:16.344 | 99.99th=[ 676] 00:13:16.344 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:16.344 slat (usec): min=7, max=30786, avg=37.76, stdev=785.12 00:13:16.344 clat (usec): min=181, max=2790, avg=249.58, stdev=84.62 00:13:16.344 lat (usec): min=189, max=31305, avg=287.34, stdev=796.65 00:13:16.344 clat percentiles (usec): 00:13:16.344 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 206], 00:13:16.344 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:13:16.344 | 70.00th=[ 265], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 351], 00:13:16.344 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 519], 99.95th=[ 2802], 00:13:16.344 | 99.99th=[ 2802] 00:13:16.345 bw ( KiB/s): min= 8144, max= 8144, per=100.00%, avg=8144.00, stdev= 0.00, samples=1 00:13:16.345 iops : min= 2036, max= 2036, avg=2036.00, stdev= 0.00, samples=1 00:13:16.345 lat (usec) : 250=34.10%, 500=65.31%, 750=0.56% 00:13:16.345 lat (msec) : 4=0.03% 00:13:16.345 cpu : usr=2.90%, sys=4.50%, ctx=3042, majf=0, minf=2 00:13:16.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.345 issued rwts: total=1502,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.345 00:13:16.345 Run status group 0 (all jobs): 00:13:16.345 READ: bw=6002KiB/s (6146kB/s), 6002KiB/s-6002KiB/s (6146kB/s-6146kB/s), io=6008KiB (6152kB), run=1001-1001msec 00:13:16.345 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:13:16.345 00:13:16.345 Disk stats (read/write): 00:13:16.345 nvme0n1: ios=1239/1536, merge=0/0, ticks=1399/374, in_queue=1773, util=98.80% 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
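[Editor's note on the fio run above] The write/verify workload whose statistics appear above corresponds to a fio job file along these lines, reconstructed from the parameters fio-wrapper printed (the /dev/nvme0n1 name depends on device enumeration order on the initiator):

  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  do_verify=1
  verify=crc32c-intel
  verify_dump=1
  verify_backlog=512

  [job0]
  filename=/dev/nvme0n1

With iodepth=1 and runtime=1 this is a data-path smoke test, not a benchmark: the numbers above show roughly 250-350 usec of completion latency per 4 KiB I/O across the loopback TCP pair, with crc32c verification confirming data integrity end to end.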
00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.345 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.345 rmmod nvme_tcp 00:13:16.605 rmmod nvme_fabrics 00:13:16.605 rmmod nvme_keyring 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2264916 ']' 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2264916 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2264916 ']' 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2264916 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:16.605 00:17:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2264916 00:13:16.605 00:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:16.605 00:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:16.605 00:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2264916' 00:13:16.605 killing process with pid 2264916 00:13:16.606 00:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2264916 00:13:16.606 00:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2264916 00:13:16.864 00:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.864 00:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.864 00:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.864 00:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.864 00:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.864 00:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.864 00:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.864 00:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.773 00:17:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.773 00:13:18.773 real 0m10.572s 00:13:18.773 user 0m25.009s 00:13:18.773 sys 0m2.467s 00:13:18.773 00:17:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.773 00:17:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.773 ************************************ 00:13:18.773 END TEST nvmf_nmic 00:13:18.773 ************************************ 00:13:19.032 00:17:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:19.032 00:17:10 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:19.032 00:17:10 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:19.032 00:17:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.032 00:17:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.032 ************************************ 00:13:19.032 START TEST nvmf_fio_target 00:13:19.032 ************************************ 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:19.032 * Looking for test storage... 00:13:19.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.032 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.033 00:17:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.938 00:17:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:20.938 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.938 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:20.939 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.939 00:17:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:20.939 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:20.939 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.939 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:13:21.197 00:13:21.197 --- 10.0.0.2 ping statistics --- 00:13:21.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.197 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:13:21.197 00:13:21.197 --- 10.0.0.1 ping statistics --- 00:13:21.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.197 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2267679 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2267679 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2267679 ']' 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
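[Editor's note on nvmftestinit above] The plumbing traced above gives the TCP tests a real two-port data path on a single host: the first E810 port (cvl_0_0) is moved into a network namespace and becomes the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the traced commands, with the device and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

nvmf_tgt is then started inside the namespace (note the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" prefix in the trace), so its listeners bind on 10.0.0.2 while nvme connect runs from the root namespace.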
00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.197 00:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.197 [2024-07-16 00:17:12.712629] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:13:21.197 [2024-07-16 00:17:12.712708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.197 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.197 [2024-07-16 00:17:12.783112] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.457 [2024-07-16 00:17:12.904617] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.457 [2024-07-16 00:17:12.904690] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.457 [2024-07-16 00:17:12.904706] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.457 [2024-07-16 00:17:12.904719] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.457 [2024-07-16 00:17:12.904731] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.457 [2024-07-16 00:17:12.904820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.457 [2024-07-16 00:17:12.904874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.457 [2024-07-16 00:17:12.904940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.457 [2024-07-16 00:17:12.904943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.390 00:17:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.390 00:17:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:22.390 00:17:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.390 00:17:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.390 00:17:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.390 00:17:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.390 00:17:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:22.390 [2024-07-16 00:17:13.891656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.390 00:17:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.647 00:17:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:22.647 00:17:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.904 00:17:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:22.904 00:17:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.162 00:17:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
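[Editor's note on the fio.sh target setup] fio.sh builds its namespaces in layers: two plain malloc bdevs, a two-member RAID0, and, as the trace continues below, a three-member concat, all exported through one subsystem so the host sees four namespaces (nvme0n1..nvme0n4). The construction condensed into the underlying RPC calls (a sketch of the traced sequence; the order of listener and namespace additions is compressed relative to the trace):

  scripts/rpc.py bdev_malloc_create 64 512        # run seven times: Malloc0 .. Malloc6
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
  done
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420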
00:13:23.162 00:17:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.420 00:17:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:23.420 00:17:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:23.678 00:17:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.936 00:17:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:23.936 00:17:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.193 00:17:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:24.193 00:17:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.450 00:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:24.450 00:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:24.708 00:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.965 00:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:24.965 00:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:25.222 00:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:25.222 00:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.479 00:17:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.736 [2024-07-16 00:17:17.218993] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.736 00:17:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:25.992 00:17:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:26.250 00:17:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.822 00:17:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:26.822 00:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.822 00:17:18 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.822 00:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:26.822 00:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:26.822 00:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:28.748 00:17:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:28.748 00:17:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:28.748 00:17:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.748 00:17:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:28.748 00:17:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.748 00:17:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:28.748 00:17:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:29.005 [global] 00:13:29.006 thread=1 00:13:29.006 invalidate=1 00:13:29.006 rw=write 00:13:29.006 time_based=1 00:13:29.006 runtime=1 00:13:29.006 ioengine=libaio 00:13:29.006 direct=1 00:13:29.006 bs=4096 00:13:29.006 iodepth=1 00:13:29.006 norandommap=0 00:13:29.006 numjobs=1 00:13:29.006 00:13:29.006 verify_dump=1 00:13:29.006 verify_backlog=512 00:13:29.006 verify_state_save=0 00:13:29.006 do_verify=1 00:13:29.006 verify=crc32c-intel 00:13:29.006 [job0] 00:13:29.006 filename=/dev/nvme0n1 00:13:29.006 [job1] 00:13:29.006 filename=/dev/nvme0n2 00:13:29.006 [job2] 00:13:29.006 filename=/dev/nvme0n3 00:13:29.006 [job3] 00:13:29.006 filename=/dev/nvme0n4 00:13:29.006 Could not set queue depth (nvme0n1) 00:13:29.006 Could not set queue depth (nvme0n2) 00:13:29.006 Could not set queue depth (nvme0n3) 00:13:29.006 Could not set queue depth (nvme0n4) 00:13:29.006 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.006 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.006 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.006 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.006 fio-3.35 00:13:29.006 Starting 4 threads 00:13:30.380 00:13:30.380 job0: (groupid=0, jobs=1): err= 0: pid=2268756: Tue Jul 16 00:17:21 2024 00:13:30.380 read: IOPS=1295, BW=5183KiB/s (5307kB/s)(5188KiB/1001msec) 00:13:30.380 slat (nsec): min=6130, max=67800, avg=13202.73, stdev=7715.54 00:13:30.380 clat (usec): min=309, max=838, avg=434.25, stdev=85.47 00:13:30.380 lat (usec): min=317, max=847, avg=447.45, stdev=88.44 00:13:30.380 clat percentiles (usec): 00:13:30.380 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 343], 00:13:30.380 | 30.00th=[ 359], 40.00th=[ 388], 50.00th=[ 433], 60.00th=[ 474], 00:13:30.380 | 70.00th=[ 498], 80.00th=[ 519], 90.00th=[ 545], 95.00th=[ 562], 00:13:30.380 | 99.00th=[ 594], 99.50th=[ 635], 99.90th=[ 791], 99.95th=[ 840], 00:13:30.380 | 99.99th=[ 840] 00:13:30.380 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:30.380 slat (nsec): min=7613, max=76336, avg=14892.84, stdev=7846.04 00:13:30.380 clat (usec): 
min=192, max=502, avg=250.84, stdev=41.08 00:13:30.380 lat (usec): min=202, max=510, avg=265.74, stdev=44.54 00:13:30.380 clat percentiles (usec): 00:13:30.380 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:13:30.380 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:13:30.380 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 326], 00:13:30.380 | 99.00th=[ 420], 99.50th=[ 441], 99.90th=[ 465], 99.95th=[ 502], 00:13:30.380 | 99.99th=[ 502] 00:13:30.380 bw ( KiB/s): min= 8192, max= 8192, per=54.12%, avg=8192.00, stdev= 0.00, samples=1 00:13:30.381 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:30.381 lat (usec) : 250=31.20%, 500=55.56%, 750=13.17%, 1000=0.07% 00:13:30.381 cpu : usr=3.10%, sys=5.40%, ctx=2834, majf=0, minf=1 00:13:30.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.381 issued rwts: total=1297,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.381 job1: (groupid=0, jobs=1): err= 0: pid=2268757: Tue Jul 16 00:17:21 2024 00:13:30.381 read: IOPS=443, BW=1775KiB/s (1817kB/s)(1796KiB/1012msec) 00:13:30.381 slat (nsec): min=6847, max=33396, avg=12417.87, stdev=2904.05 00:13:30.381 clat (usec): min=399, max=41007, avg=1865.70, stdev=7278.04 00:13:30.381 lat (usec): min=407, max=41024, avg=1878.12, stdev=7279.09 00:13:30.381 clat percentiles (usec): 00:13:30.381 | 1.00th=[ 408], 5.00th=[ 437], 10.00th=[ 445], 20.00th=[ 486], 00:13:30.381 | 30.00th=[ 502], 40.00th=[ 510], 50.00th=[ 519], 60.00th=[ 529], 00:13:30.381 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 586], 95.00th=[ 611], 00:13:30.381 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:30.381 | 99.99th=[41157] 00:13:30.381 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:13:30.381 slat (nsec): min=6545, max=38228, avg=11981.19, stdev=5022.36 00:13:30.381 clat (usec): min=202, max=828, avg=309.30, stdev=73.10 00:13:30.381 lat (usec): min=210, max=842, avg=321.28, stdev=74.35 00:13:30.381 clat percentiles (usec): 00:13:30.381 | 1.00th=[ 215], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 260], 00:13:30.381 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 306], 00:13:30.381 | 70.00th=[ 322], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 445], 00:13:30.381 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 832], 99.95th=[ 832], 00:13:30.381 | 99.99th=[ 832] 00:13:30.381 bw ( KiB/s): min= 4096, max= 4096, per=27.06%, avg=4096.00, stdev= 0.00, samples=1 00:13:30.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:30.381 lat (usec) : 250=7.60%, 500=58.38%, 750=32.26%, 1000=0.21% 00:13:30.381 lat (msec) : 50=1.56% 00:13:30.381 cpu : usr=0.49%, sys=1.29%, ctx=962, majf=0, minf=1 00:13:30.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.381 issued rwts: total=449,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.381 job2: (groupid=0, jobs=1): err= 0: pid=2268758: Tue Jul 16 00:17:21 2024 00:13:30.381 read: IOPS=1022, BW=4092KiB/s 
(4190kB/s)(4096KiB/1001msec) 00:13:30.381 slat (nsec): min=6412, max=64940, avg=15688.32, stdev=8415.20 00:13:30.381 clat (usec): min=415, max=40972, avg=560.88, stdev=1265.18 00:13:30.381 lat (usec): min=433, max=40979, avg=576.57, stdev=1264.89 00:13:30.381 clat percentiles (usec): 00:13:30.381 | 1.00th=[ 437], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 482], 00:13:30.381 | 30.00th=[ 494], 40.00th=[ 506], 50.00th=[ 515], 60.00th=[ 523], 00:13:30.381 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 619], 00:13:30.381 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 758], 99.95th=[41157], 00:13:30.381 | 99.99th=[41157] 00:13:30.381 write: IOPS=1309, BW=5239KiB/s (5364kB/s)(5244KiB/1001msec); 0 zone resets 00:13:30.381 slat (usec): min=8, max=13185, avg=26.57, stdev=363.79 00:13:30.381 clat (usec): min=207, max=518, avg=276.00, stdev=51.54 00:13:30.381 lat (usec): min=220, max=13466, avg=302.57, stdev=367.71 00:13:30.381 clat percentiles (usec): 00:13:30.381 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 243], 00:13:30.381 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:13:30.381 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 351], 95.00th=[ 396], 00:13:30.381 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 515], 99.95th=[ 519], 00:13:30.381 | 99.99th=[ 519] 00:13:30.381 bw ( KiB/s): min= 4224, max= 4224, per=27.91%, avg=4224.00, stdev= 0.00, samples=1 00:13:30.381 iops : min= 1056, max= 1056, avg=1056.00, stdev= 0.00, samples=1 00:13:30.381 lat (usec) : 250=18.76%, 500=52.46%, 750=28.69%, 1000=0.04% 00:13:30.381 lat (msec) : 50=0.04% 00:13:30.381 cpu : usr=2.60%, sys=5.10%, ctx=2339, majf=0, minf=1 00:13:30.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.381 issued rwts: total=1024,1311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.381 job3: (groupid=0, jobs=1): err= 0: pid=2268760: Tue Jul 16 00:17:21 2024 00:13:30.381 read: IOPS=350, BW=1404KiB/s (1437kB/s)(1436KiB/1023msec) 00:13:30.381 slat (nsec): min=6881, max=36767, avg=13689.36, stdev=3352.51 00:13:30.381 clat (usec): min=421, max=41013, avg=2309.49, stdev=8357.87 00:13:30.381 lat (usec): min=434, max=41032, avg=2323.18, stdev=8359.45 00:13:30.381 clat percentiles (usec): 00:13:30.381 | 1.00th=[ 433], 5.00th=[ 453], 10.00th=[ 465], 20.00th=[ 482], 00:13:30.381 | 30.00th=[ 494], 40.00th=[ 502], 50.00th=[ 510], 60.00th=[ 519], 00:13:30.381 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 668], 00:13:30.381 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:30.381 | 99.99th=[41157] 00:13:30.381 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:13:30.381 slat (usec): min=7, max=1967, avg=17.65, stdev=86.50 00:13:30.381 clat (usec): min=234, max=866, avg=337.97, stdev=68.47 00:13:30.381 lat (usec): min=253, max=2263, avg=355.61, stdev=109.19 00:13:30.381 clat percentiles (usec): 00:13:30.381 | 1.00th=[ 247], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:13:30.381 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 338], 00:13:30.381 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 408], 95.00th=[ 453], 00:13:30.381 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 865], 99.95th=[ 865], 00:13:30.381 | 99.99th=[ 865] 00:13:30.381 bw ( KiB/s): min= 4096, max= 4096, 
per=27.06%, avg=4096.00, stdev= 0.00, samples=1 00:13:30.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:30.381 lat (usec) : 250=0.80%, 500=73.02%, 750=24.00%, 1000=0.34% 00:13:30.381 lat (msec) : 50=1.84% 00:13:30.381 cpu : usr=0.20%, sys=1.57%, ctx=874, majf=0, minf=2 00:13:30.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.381 issued rwts: total=359,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.381 00:13:30.381 Run status group 0 (all jobs): 00:13:30.381 READ: bw=11.9MiB/s (12.5MB/s), 1404KiB/s-5183KiB/s (1437kB/s-5307kB/s), io=12.2MiB (12.8MB), run=1001-1023msec 00:13:30.381 WRITE: bw=14.8MiB/s (15.5MB/s), 2002KiB/s-6138KiB/s (2050kB/s-6285kB/s), io=15.1MiB (15.9MB), run=1001-1023msec 00:13:30.381 00:13:30.381 Disk stats (read/write): 00:13:30.381 nvme0n1: ios=1074/1495, merge=0/0, ticks=465/360, in_queue=825, util=87.27% 00:13:30.381 nvme0n2: ios=495/512, merge=0/0, ticks=757/152, in_queue=909, util=91.23% 00:13:30.381 nvme0n3: ios=950/1024, merge=0/0, ticks=794/267, in_queue=1061, util=94.66% 00:13:30.381 nvme0n4: ios=423/512, merge=0/0, ticks=1084/173, in_queue=1257, util=96.52% 00:13:30.381 00:17:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:30.381 [global] 00:13:30.381 thread=1 00:13:30.381 invalidate=1 00:13:30.381 rw=randwrite 00:13:30.381 time_based=1 00:13:30.381 runtime=1 00:13:30.381 ioengine=libaio 00:13:30.381 direct=1 00:13:30.381 bs=4096 00:13:30.381 iodepth=1 00:13:30.381 norandommap=0 00:13:30.381 numjobs=1 00:13:30.381 00:13:30.381 verify_dump=1 00:13:30.381 verify_backlog=512 00:13:30.381 verify_state_save=0 00:13:30.381 do_verify=1 00:13:30.381 verify=crc32c-intel 00:13:30.381 [job0] 00:13:30.381 filename=/dev/nvme0n1 00:13:30.381 [job1] 00:13:30.381 filename=/dev/nvme0n2 00:13:30.381 [job2] 00:13:30.381 filename=/dev/nvme0n3 00:13:30.381 [job3] 00:13:30.381 filename=/dev/nvme0n4 00:13:30.381 Could not set queue depth (nvme0n1) 00:13:30.381 Could not set queue depth (nvme0n2) 00:13:30.381 Could not set queue depth (nvme0n3) 00:13:30.381 Could not set queue depth (nvme0n4) 00:13:30.639 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.639 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.639 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.639 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.639 fio-3.35 00:13:30.639 Starting 4 threads 00:13:32.018 00:13:32.018 job0: (groupid=0, jobs=1): err= 0: pid=2268988: Tue Jul 16 00:17:23 2024 00:13:32.018 read: IOPS=1125, BW=4503KiB/s (4612kB/s)(4508KiB/1001msec) 00:13:32.018 slat (nsec): min=5799, max=56084, avg=15597.75, stdev=9479.39 00:13:32.018 clat (usec): min=296, max=40770, avg=477.33, stdev=1274.87 00:13:32.018 lat (usec): min=305, max=40783, avg=492.92, stdev=1274.96 00:13:32.018 clat percentiles (usec): 00:13:32.018 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 347], 00:13:32.018 | 30.00th=[ 367], 40.00th=[ 396], 
50.00th=[ 416], 60.00th=[ 441], 00:13:32.018 | 70.00th=[ 469], 80.00th=[ 498], 90.00th=[ 545], 95.00th=[ 562], 00:13:32.018 | 99.00th=[ 709], 99.50th=[ 1205], 99.90th=[14353], 99.95th=[40633], 00:13:32.018 | 99.99th=[40633] 00:13:32.018 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:32.018 slat (nsec): min=6154, max=74505, avg=17237.09, stdev=9580.20 00:13:32.018 clat (usec): min=193, max=2345, avg=264.42, stdev=77.42 00:13:32.018 lat (usec): min=201, max=2359, avg=281.66, stdev=79.54 00:13:32.018 clat percentiles (usec): 00:13:32.018 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 229], 00:13:32.018 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:13:32.018 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 338], 95.00th=[ 379], 00:13:32.018 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 1156], 99.95th=[ 2343], 00:13:32.018 | 99.99th=[ 2343] 00:13:32.018 bw ( KiB/s): min= 7096, max= 7096, per=34.68%, avg=7096.00, stdev= 0.00, samples=1 00:13:32.018 iops : min= 1774, max= 1774, avg=1774.00, stdev= 0.00, samples=1 00:13:32.018 lat (usec) : 250=31.73%, 500=60.20%, 750=7.51%, 1000=0.19% 00:13:32.018 lat (msec) : 2=0.26%, 4=0.04%, 20=0.04%, 50=0.04% 00:13:32.018 cpu : usr=2.90%, sys=6.00%, ctx=2664, majf=0, minf=1 00:13:32.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.018 issued rwts: total=1127,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.018 job1: (groupid=0, jobs=1): err= 0: pid=2268990: Tue Jul 16 00:17:23 2024 00:13:32.018 read: IOPS=1066, BW=4268KiB/s (4370kB/s)(4272KiB/1001msec) 00:13:32.018 slat (nsec): min=5400, max=60531, avg=21644.71, stdev=10742.69 00:13:32.018 clat (usec): min=330, max=1490, avg=481.28, stdev=71.97 00:13:32.018 lat (usec): min=336, max=1504, avg=502.93, stdev=72.91 00:13:32.018 clat percentiles (usec): 00:13:32.018 | 1.00th=[ 343], 5.00th=[ 379], 10.00th=[ 396], 20.00th=[ 437], 00:13:32.018 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 490], 00:13:32.018 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 586], 00:13:32.018 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 971], 99.95th=[ 1483], 00:13:32.018 | 99.99th=[ 1483] 00:13:32.018 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:32.018 slat (nsec): min=8094, max=95727, avg=18056.94, stdev=9284.12 00:13:32.018 clat (usec): min=196, max=488, avg=274.04, stdev=55.68 00:13:32.018 lat (usec): min=215, max=500, avg=292.10, stdev=57.79 00:13:32.018 clat percentiles (usec): 00:13:32.018 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 229], 00:13:32.018 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 273], 00:13:32.018 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 363], 95.00th=[ 404], 00:13:32.018 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 478], 99.95th=[ 490], 00:13:32.018 | 99.99th=[ 490] 00:13:32.018 bw ( KiB/s): min= 7000, max= 7000, per=34.21%, avg=7000.00, stdev= 0.00, samples=1 00:13:32.018 iops : min= 1750, max= 1750, avg=1750.00, stdev= 0.00, samples=1 00:13:32.018 lat (usec) : 250=22.77%, 500=63.44%, 750=13.63%, 1000=0.12% 00:13:32.018 lat (msec) : 2=0.04% 00:13:32.018 cpu : usr=2.60%, sys=5.40%, ctx=2607, majf=0, minf=1 00:13:32.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:13:32.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.018 issued rwts: total=1068,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.018 job2: (groupid=0, jobs=1): err= 0: pid=2268991: Tue Jul 16 00:17:23 2024 00:13:32.018 read: IOPS=294, BW=1179KiB/s (1207kB/s)(1180KiB/1001msec) 00:13:32.018 slat (nsec): min=6260, max=40048, avg=9693.21, stdev=6035.91 00:13:32.018 clat (usec): min=330, max=41987, avg=2890.36, stdev=9746.03 00:13:32.018 lat (usec): min=337, max=42023, avg=2900.06, stdev=9750.62 00:13:32.018 clat percentiles (usec): 00:13:32.018 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:13:32.018 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 392], 00:13:32.018 | 70.00th=[ 465], 80.00th=[ 519], 90.00th=[ 578], 95.00th=[41157], 00:13:32.018 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:32.018 | 99.99th=[42206] 00:13:32.018 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:13:32.018 slat (nsec): min=8072, max=31764, avg=9479.47, stdev=2720.51 00:13:32.018 clat (usec): min=212, max=484, avg=269.42, stdev=24.87 00:13:32.018 lat (usec): min=222, max=516, avg=278.90, stdev=25.65 00:13:32.018 clat percentiles (usec): 00:13:32.018 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 249], 00:13:32.018 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:13:32.018 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 289], 95.00th=[ 297], 00:13:32.018 | 99.00th=[ 326], 99.50th=[ 392], 99.90th=[ 486], 99.95th=[ 486], 00:13:32.018 | 99.99th=[ 486] 00:13:32.018 bw ( KiB/s): min= 4096, max= 4096, per=20.02%, avg=4096.00, stdev= 0.00, samples=1 00:13:32.018 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:32.018 lat (usec) : 250=12.89%, 500=77.57%, 750=7.19%, 1000=0.12% 00:13:32.018 lat (msec) : 50=2.23% 00:13:32.018 cpu : usr=0.70%, sys=0.90%, ctx=807, majf=0, minf=2 00:13:32.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.018 issued rwts: total=295,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.018 job3: (groupid=0, jobs=1): err= 0: pid=2268992: Tue Jul 16 00:17:23 2024 00:13:32.018 read: IOPS=1076, BW=4308KiB/s (4411kB/s)(4312KiB/1001msec) 00:13:32.018 slat (nsec): min=6069, max=64421, avg=21295.10, stdev=10459.19 00:13:32.018 clat (usec): min=370, max=659, avg=462.34, stdev=39.24 00:13:32.018 lat (usec): min=379, max=672, avg=483.63, stdev=39.76 00:13:32.018 clat percentiles (usec): 00:13:32.018 | 1.00th=[ 383], 5.00th=[ 404], 10.00th=[ 420], 20.00th=[ 437], 00:13:32.018 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 465], 00:13:32.018 | 70.00th=[ 474], 80.00th=[ 490], 90.00th=[ 510], 95.00th=[ 545], 00:13:32.018 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 627], 99.95th=[ 660], 00:13:32.018 | 99.99th=[ 660] 00:13:32.018 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:32.018 slat (nsec): min=6041, max=72035, avg=18367.02, stdev=12210.84 00:13:32.018 clat (usec): min=210, max=662, avg=284.16, stdev=45.76 00:13:32.018 lat (usec): min=218, max=681, 
avg=302.52, stdev=54.09 00:13:32.018 clat percentiles (usec): 00:13:32.018 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 239], 00:13:32.018 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:13:32.018 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 351], 00:13:32.018 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 644], 99.95th=[ 660], 00:13:32.018 | 99.99th=[ 660] 00:13:32.018 bw ( KiB/s): min= 7048, max= 7048, per=34.45%, avg=7048.00, stdev= 0.00, samples=1 00:13:32.018 iops : min= 1762, max= 1762, avg=1762.00, stdev= 0.00, samples=1 00:13:32.018 lat (usec) : 250=15.95%, 500=77.66%, 750=6.39% 00:13:32.018 cpu : usr=2.70%, sys=5.40%, ctx=2614, majf=0, minf=1 00:13:32.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.018 issued rwts: total=1078,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.018 00:13:32.018 Run status group 0 (all jobs): 00:13:32.018 READ: bw=13.9MiB/s (14.6MB/s), 1179KiB/s-4503KiB/s (1207kB/s-4612kB/s), io=13.9MiB (14.6MB), run=1001-1001msec 00:13:32.018 WRITE: bw=20.0MiB/s (20.9MB/s), 2046KiB/s-6138KiB/s (2095kB/s-6285kB/s), io=20.0MiB (21.0MB), run=1001-1001msec 00:13:32.018 00:13:32.018 Disk stats (read/write): 00:13:32.018 nvme0n1: ios=1076/1084, merge=0/0, ticks=901/271, in_queue=1172, util=98.30% 00:13:32.018 nvme0n2: ios=1071/1049, merge=0/0, ticks=1484/265, in_queue=1749, util=98.46% 00:13:32.018 nvme0n3: ios=290/512, merge=0/0, ticks=647/132, in_queue=779, util=87.89% 00:13:32.018 nvme0n4: ios=1037/1062, merge=0/0, ticks=752/285, in_queue=1037, util=91.44% 00:13:32.018 00:17:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:32.018 [global] 00:13:32.018 thread=1 00:13:32.018 invalidate=1 00:13:32.018 rw=write 00:13:32.018 time_based=1 00:13:32.018 runtime=1 00:13:32.018 ioengine=libaio 00:13:32.018 direct=1 00:13:32.018 bs=4096 00:13:32.018 iodepth=128 00:13:32.018 norandommap=0 00:13:32.018 numjobs=1 00:13:32.018 00:13:32.018 verify_dump=1 00:13:32.018 verify_backlog=512 00:13:32.018 verify_state_save=0 00:13:32.018 do_verify=1 00:13:32.018 verify=crc32c-intel 00:13:32.018 [job0] 00:13:32.018 filename=/dev/nvme0n1 00:13:32.018 [job1] 00:13:32.018 filename=/dev/nvme0n2 00:13:32.018 [job2] 00:13:32.018 filename=/dev/nvme0n3 00:13:32.018 [job3] 00:13:32.018 filename=/dev/nvme0n4 00:13:32.018 Could not set queue depth (nvme0n1) 00:13:32.018 Could not set queue depth (nvme0n2) 00:13:32.018 Could not set queue depth (nvme0n3) 00:13:32.018 Could not set queue depth (nvme0n4) 00:13:32.018 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.018 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.018 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.018 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.018 fio-3.35 00:13:32.018 Starting 4 threads 00:13:33.395 00:13:33.395 job0: (groupid=0, jobs=1): err= 0: pid=2269291: Tue Jul 16 00:17:24 2024 00:13:33.395 read: IOPS=4985, BW=19.5MiB/s 
(20.4MB/s)(19.5MiB/1002msec) 00:13:33.395 slat (usec): min=2, max=23008, avg=96.09, stdev=625.62 00:13:33.395 clat (usec): min=845, max=50838, avg=12150.47, stdev=4065.97 00:13:33.395 lat (usec): min=3831, max=50847, avg=12246.56, stdev=4103.91 00:13:33.395 clat percentiles (usec): 00:13:33.395 | 1.00th=[ 7242], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10683], 00:13:33.395 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:13:33.395 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13960], 95.00th=[15008], 00:13:33.395 | 99.00th=[31065], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:13:33.395 | 99.99th=[50594] 00:13:33.396 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:13:33.396 slat (usec): min=3, max=16345, avg=92.68, stdev=529.56 00:13:33.396 clat (usec): min=6670, max=52781, avg=12877.02, stdev=5220.35 00:13:33.396 lat (usec): min=6675, max=52788, avg=12969.70, stdev=5231.09 00:13:33.396 clat percentiles (usec): 00:13:33.396 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10945], 00:13:33.396 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:13:33.396 | 70.00th=[12518], 80.00th=[13304], 90.00th=[13960], 95.00th=[15533], 00:13:33.396 | 99.00th=[46924], 99.50th=[46924], 99.90th=[52691], 99.95th=[52691], 00:13:33.396 | 99.99th=[52691] 00:13:33.396 bw ( KiB/s): min=20120, max=20840, per=31.38%, avg=20480.00, stdev=509.12, samples=2 00:13:33.396 iops : min= 5030, max= 5210, avg=5120.00, stdev=127.28, samples=2 00:13:33.396 lat (usec) : 1000=0.01% 00:13:33.396 lat (msec) : 4=0.15%, 10=8.84%, 20=88.06%, 50=2.34%, 100=0.60% 00:13:33.396 cpu : usr=5.39%, sys=9.29%, ctx=447, majf=0, minf=1 00:13:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.396 issued rwts: total=4995,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.396 job1: (groupid=0, jobs=1): err= 0: pid=2269306: Tue Jul 16 00:17:24 2024 00:13:33.396 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1020msec) 00:13:33.396 slat (usec): min=2, max=21405, avg=127.92, stdev=824.59 00:13:33.396 clat (usec): min=4628, max=40739, avg=17022.06, stdev=5427.89 00:13:33.396 lat (usec): min=4634, max=40752, avg=17149.98, stdev=5467.31 00:13:33.396 clat percentiles (usec): 00:13:33.396 | 1.00th=[ 7177], 5.00th=[10814], 10.00th=[11994], 20.00th=[13698], 00:13:33.396 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[16188], 00:13:33.396 | 70.00th=[17957], 80.00th=[20579], 90.00th=[23987], 95.00th=[27395], 00:13:33.396 | 99.00th=[34341], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:13:33.396 | 99.99th=[40633] 00:13:33.396 write: IOPS=3854, BW=15.1MiB/s (15.8MB/s)(15.4MiB/1020msec); 0 zone resets 00:13:33.396 slat (usec): min=3, max=23822, avg=129.22, stdev=796.86 00:13:33.396 clat (usec): min=6147, max=51459, avg=17293.42, stdev=6937.32 00:13:33.396 lat (usec): min=6152, max=51465, avg=17422.64, stdev=6984.93 00:13:33.396 clat percentiles (usec): 00:13:33.396 | 1.00th=[ 7832], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[12256], 00:13:33.396 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14877], 60.00th=[16450], 00:13:33.396 | 70.00th=[19792], 80.00th=[22938], 90.00th=[25560], 95.00th=[30802], 00:13:33.396 | 99.00th=[42730], 99.50th=[44827], 99.90th=[51643], 
99.95th=[51643], 00:13:33.396 | 99.99th=[51643] 00:13:33.396 bw ( KiB/s): min=13776, max=16656, per=23.32%, avg=15216.00, stdev=2036.47, samples=2 00:13:33.396 iops : min= 3444, max= 4164, avg=3804.00, stdev=509.12, samples=2 00:13:33.396 lat (msec) : 10=5.08%, 20=69.62%, 50=25.21%, 100=0.08% 00:13:33.396 cpu : usr=2.55%, sys=5.59%, ctx=411, majf=0, minf=1 00:13:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.396 issued rwts: total=3584,3932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.396 job2: (groupid=0, jobs=1): err= 0: pid=2269344: Tue Jul 16 00:17:24 2024 00:13:33.396 read: IOPS=3244, BW=12.7MiB/s (13.3MB/s)(13.2MiB/1043msec) 00:13:33.396 slat (usec): min=2, max=14352, avg=131.27, stdev=767.02 00:13:33.396 clat (usec): min=10376, max=59672, avg=18543.17, stdev=10142.32 00:13:33.396 lat (usec): min=10837, max=59683, avg=18674.45, stdev=10179.70 00:13:33.396 clat percentiles (usec): 00:13:33.396 | 1.00th=[11207], 5.00th=[11863], 10.00th=[12256], 20.00th=[12911], 00:13:33.396 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14877], 60.00th=[15664], 00:13:33.396 | 70.00th=[16450], 80.00th=[18744], 90.00th=[34341], 95.00th=[45351], 00:13:33.396 | 99.00th=[53216], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:13:33.396 | 99.99th=[59507] 00:13:33.396 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:13:33.396 slat (usec): min=3, max=18049, avg=145.36, stdev=782.79 00:13:33.396 clat (usec): min=6789, max=65877, avg=19113.92, stdev=9480.01 00:13:33.396 lat (usec): min=6795, max=65885, avg=19259.28, stdev=9517.84 00:13:33.396 clat percentiles (usec): 00:13:33.396 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11731], 20.00th=[13042], 00:13:33.396 | 30.00th=[13304], 40.00th=[13960], 50.00th=[15008], 60.00th=[17695], 00:13:33.396 | 70.00th=[19792], 80.00th=[24773], 90.00th=[30540], 95.00th=[39060], 00:13:33.396 | 99.00th=[55837], 99.50th=[60031], 99.90th=[60556], 99.95th=[61604], 00:13:33.396 | 99.99th=[65799] 00:13:33.396 bw ( KiB/s): min=12288, max=16384, per=21.97%, avg=14336.00, stdev=2896.31, samples=2 00:13:33.396 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:13:33.396 lat (msec) : 10=0.50%, 20=76.02%, 50=20.71%, 100=2.77% 00:13:33.396 cpu : usr=3.93%, sys=7.20%, ctx=378, majf=0, minf=1 00:13:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.396 issued rwts: total=3384,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.396 job3: (groupid=0, jobs=1): err= 0: pid=2269345: Tue Jul 16 00:17:24 2024 00:13:33.396 read: IOPS=3980, BW=15.5MiB/s (16.3MB/s)(16.0MiB/1029msec) 00:13:33.396 slat (usec): min=2, max=13626, avg=113.20, stdev=879.65 00:13:33.396 clat (usec): min=1805, max=34085, avg=15538.07, stdev=4632.92 00:13:33.396 lat (usec): min=1823, max=34097, avg=15651.27, stdev=4679.49 00:13:33.396 clat percentiles (usec): 00:13:33.396 | 1.00th=[ 5735], 5.00th=[10814], 10.00th=[11731], 20.00th=[12125], 00:13:33.396 | 30.00th=[12780], 40.00th=[13566], 50.00th=[14222], 60.00th=[14746], 
00:13:33.396 | 70.00th=[17171], 80.00th=[19530], 90.00th=[21890], 95.00th=[23725], 00:13:33.396 | 99.00th=[28705], 99.50th=[31589], 99.90th=[33817], 99.95th=[34341], 00:13:33.396 | 99.99th=[34341] 00:13:33.396 write: IOPS=4256, BW=16.6MiB/s (17.4MB/s)(17.1MiB/1029msec); 0 zone resets 00:13:33.396 slat (usec): min=3, max=17148, avg=99.16, stdev=744.30 00:13:33.396 clat (usec): min=965, max=53753, avg=15277.74, stdev=8176.65 00:13:33.396 lat (usec): min=1210, max=53761, avg=15376.90, stdev=8211.18 00:13:33.396 clat percentiles (usec): 00:13:33.396 | 1.00th=[ 3523], 5.00th=[ 5669], 10.00th=[ 7504], 20.00th=[ 9241], 00:13:33.396 | 30.00th=[10814], 40.00th=[12256], 50.00th=[13960], 60.00th=[14877], 00:13:33.396 | 70.00th=[16057], 80.00th=[21627], 90.00th=[23725], 95.00th=[30802], 00:13:33.396 | 99.00th=[49021], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:13:33.396 | 99.99th=[53740] 00:13:33.396 bw ( KiB/s): min=13792, max=20232, per=26.07%, avg=17012.00, stdev=4553.77, samples=2 00:13:33.396 iops : min= 3448, max= 5058, avg=4253.00, stdev=1138.44, samples=2 00:13:33.396 lat (usec) : 1000=0.01% 00:13:33.396 lat (msec) : 2=0.29%, 4=0.71%, 10=13.58%, 20=65.04%, 50=20.09% 00:13:33.396 lat (msec) : 100=0.27% 00:13:33.396 cpu : usr=2.43%, sys=6.13%, ctx=467, majf=0, minf=1 00:13:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.396 issued rwts: total=4096,4380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.396 00:13:33.396 Run status group 0 (all jobs): 00:13:33.396 READ: bw=60.1MiB/s (63.1MB/s), 12.7MiB/s-19.5MiB/s (13.3MB/s-20.4MB/s), io=62.7MiB (65.8MB), run=1002-1043msec 00:13:33.396 WRITE: bw=63.7MiB/s (66.8MB/s), 13.4MiB/s-20.0MiB/s (14.1MB/s-20.9MB/s), io=66.5MiB (69.7MB), run=1002-1043msec 00:13:33.396 00:13:33.396 Disk stats (read/write): 00:13:33.396 nvme0n1: ios=4146/4250, merge=0/0, ticks=19458/18774, in_queue=38232, util=84.87% 00:13:33.396 nvme0n2: ios=3122/3247, merge=0/0, ticks=29444/29070, in_queue=58514, util=97.46% 00:13:33.396 nvme0n3: ios=2654/3072, merge=0/0, ticks=12493/15530, in_queue=28023, util=100.00% 00:13:33.396 nvme0n4: ios=3629/3735, merge=0/0, ticks=54149/45512, in_queue=99661, util=95.97% 00:13:33.396 00:17:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:33.396 [global] 00:13:33.396 thread=1 00:13:33.396 invalidate=1 00:13:33.396 rw=randwrite 00:13:33.396 time_based=1 00:13:33.396 runtime=1 00:13:33.396 ioengine=libaio 00:13:33.396 direct=1 00:13:33.396 bs=4096 00:13:33.396 iodepth=128 00:13:33.396 norandommap=0 00:13:33.396 numjobs=1 00:13:33.396 00:13:33.396 verify_dump=1 00:13:33.396 verify_backlog=512 00:13:33.396 verify_state_save=0 00:13:33.396 do_verify=1 00:13:33.396 verify=crc32c-intel 00:13:33.396 [job0] 00:13:33.396 filename=/dev/nvme0n1 00:13:33.396 [job1] 00:13:33.396 filename=/dev/nvme0n2 00:13:33.396 [job2] 00:13:33.396 filename=/dev/nvme0n3 00:13:33.396 [job3] 00:13:33.396 filename=/dev/nvme0n4 00:13:33.396 Could not set queue depth (nvme0n1) 00:13:33.396 Could not set queue depth (nvme0n2) 00:13:33.396 Could not set queue depth (nvme0n3) 00:13:33.396 Could not set queue depth (nvme0n4) 00:13:33.654 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.654 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.654 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.654 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.654 fio-3.35 00:13:33.654 Starting 4 threads 00:13:35.030 00:13:35.030 job0: (groupid=0, jobs=1): err= 0: pid=2269569: Tue Jul 16 00:17:26 2024 00:13:35.030 read: IOPS=3672, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1004msec) 00:13:35.030 slat (usec): min=2, max=7528, avg=128.95, stdev=671.04 00:13:35.030 clat (usec): min=878, max=47747, avg=15968.90, stdev=5939.85 00:13:35.030 lat (usec): min=5847, max=51611, avg=16097.84, stdev=5976.47 00:13:35.030 clat percentiles (usec): 00:13:35.030 | 1.00th=[ 9634], 5.00th=[11076], 10.00th=[11731], 20.00th=[12649], 00:13:35.030 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13698], 60.00th=[14091], 00:13:35.030 | 70.00th=[16319], 80.00th=[17957], 90.00th=[23987], 95.00th=[29492], 00:13:35.030 | 99.00th=[40109], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:13:35.030 | 99.99th=[47973] 00:13:35.030 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:13:35.030 slat (usec): min=3, max=17579, avg=120.28, stdev=772.90 00:13:35.030 clat (usec): min=6339, max=44574, avg=16663.80, stdev=7342.83 00:13:35.030 lat (usec): min=6343, max=44598, avg=16784.08, stdev=7384.00 00:13:35.030 clat percentiles (usec): 00:13:35.030 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[11207], 20.00th=[11863], 00:13:35.030 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13829], 00:13:35.030 | 70.00th=[17695], 80.00th=[22938], 90.00th=[29492], 95.00th=[34341], 00:13:35.030 | 99.00th=[35390], 99.50th=[35914], 99.90th=[38011], 99.95th=[38536], 00:13:35.030 | 99.99th=[44827] 00:13:35.030 bw ( KiB/s): min=16184, max=16384, per=22.14%, avg=16284.00, stdev=141.42, samples=2 00:13:35.030 iops : min= 4046, max= 4096, avg=4071.00, stdev=35.36, samples=2 00:13:35.030 lat (usec) : 1000=0.01% 00:13:35.030 lat (msec) : 10=3.31%, 20=75.41%, 50=21.26% 00:13:35.030 cpu : usr=4.59%, sys=6.38%, ctx=382, majf=0, minf=9 00:13:35.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:35.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:35.030 issued rwts: total=3687,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:35.030 job1: (groupid=0, jobs=1): err= 0: pid=2269570: Tue Jul 16 00:17:26 2024 00:13:35.030 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:13:35.030 slat (usec): min=3, max=3233, avg=94.71, stdev=418.36 00:13:35.030 clat (usec): min=8828, max=15602, avg=12461.88, stdev=1004.57 00:13:35.030 lat (usec): min=9323, max=15612, avg=12556.59, stdev=947.82 00:13:35.030 clat percentiles (usec): 00:13:35.030 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[11076], 20.00th=[11731], 00:13:35.030 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:13:35.030 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[13960], 00:13:35.030 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15139], 99.95th=[15139], 00:13:35.030 | 99.99th=[15664] 00:13:35.030 write: IOPS=5306, BW=20.7MiB/s 
(21.7MB/s)(20.8MiB/1002msec); 0 zone resets 00:13:35.030 slat (usec): min=4, max=2955, avg=87.34, stdev=369.85 00:13:35.030 clat (usec): min=1924, max=15378, avg=11817.82, stdev=1337.29 00:13:35.030 lat (usec): min=1940, max=15585, avg=11905.16, stdev=1307.09 00:13:35.030 clat percentiles (usec): 00:13:35.030 | 1.00th=[ 5211], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11207], 00:13:35.030 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:13:35.030 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13173], 95.00th=[13435], 00:13:35.030 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14746], 99.95th=[15139], 00:13:35.030 | 99.99th=[15401] 00:13:35.030 bw ( KiB/s): min=20600, max=20920, per=28.23%, avg=20760.00, stdev=226.27, samples=2 00:13:35.030 iops : min= 5150, max= 5230, avg=5190.00, stdev=56.57, samples=2 00:13:35.030 lat (msec) : 2=0.04%, 4=0.21%, 10=3.99%, 20=95.77% 00:13:35.030 cpu : usr=7.19%, sys=9.19%, ctx=610, majf=0, minf=13 00:13:35.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:35.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:35.030 issued rwts: total=5120,5317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:35.030 job2: (groupid=0, jobs=1): err= 0: pid=2269571: Tue Jul 16 00:17:26 2024 00:13:35.030 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:13:35.030 slat (usec): min=3, max=12784, avg=116.36, stdev=847.42 00:13:35.030 clat (usec): min=5032, max=26616, avg=14425.90, stdev=3329.86 00:13:35.030 lat (usec): min=5038, max=31587, avg=14542.26, stdev=3385.98 00:13:35.030 clat percentiles (usec): 00:13:35.030 | 1.00th=[ 8291], 5.00th=[10290], 10.00th=[10683], 20.00th=[11994], 00:13:35.030 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[14353], 00:13:35.030 | 70.00th=[15008], 80.00th=[15795], 90.00th=[19268], 95.00th=[21890], 00:13:35.030 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:13:35.030 | 99.99th=[26608] 00:13:35.030 write: IOPS=4983, BW=19.5MiB/s (20.4MB/s)(19.6MiB/1008msec); 0 zone resets 00:13:35.030 slat (usec): min=4, max=11519, avg=83.54, stdev=467.06 00:13:35.030 clat (usec): min=1278, max=26164, avg=12208.79, stdev=3275.79 00:13:35.030 lat (usec): min=1290, max=26190, avg=12292.34, stdev=3291.44 00:13:35.030 clat percentiles (usec): 00:13:35.030 | 1.00th=[ 4228], 5.00th=[ 6390], 10.00th=[ 7701], 20.00th=[ 8848], 00:13:35.030 | 30.00th=[11076], 40.00th=[12387], 50.00th=[12911], 60.00th=[13566], 00:13:35.030 | 70.00th=[13829], 80.00th=[14615], 90.00th=[15926], 95.00th=[17171], 00:13:35.030 | 99.00th=[19268], 99.50th=[19268], 99.90th=[25560], 99.95th=[26084], 00:13:35.030 | 99.99th=[26084] 00:13:35.030 bw ( KiB/s): min=18688, max=20480, per=26.63%, avg=19584.00, stdev=1267.14, samples=2 00:13:35.030 iops : min= 4672, max= 5120, avg=4896.00, stdev=316.78, samples=2 00:13:35.030 lat (msec) : 2=0.02%, 4=0.42%, 10=15.87%, 20=79.65%, 50=4.05% 00:13:35.030 cpu : usr=5.56%, sys=8.64%, ctx=502, majf=0, minf=15 00:13:35.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:35.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:35.030 issued rwts: total=4608,5023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.030 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:13:35.030 job3: (groupid=0, jobs=1): err= 0: pid=2269572: Tue Jul 16 00:17:26 2024 00:13:35.030 read: IOPS=4044, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1004msec) 00:13:35.030 slat (usec): min=2, max=12708, avg=124.40, stdev=745.06 00:13:35.030 clat (usec): min=2007, max=31721, avg=15682.88, stdev=3994.38 00:13:35.030 lat (usec): min=5231, max=31738, avg=15807.27, stdev=4032.12 00:13:35.030 clat percentiles (usec): 00:13:35.030 | 1.00th=[ 7439], 5.00th=[10683], 10.00th=[11994], 20.00th=[13042], 00:13:35.030 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14746], 60.00th=[15270], 00:13:35.030 | 70.00th=[16909], 80.00th=[18744], 90.00th=[20579], 95.00th=[22938], 00:13:35.030 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:13:35.030 | 99.99th=[31851] 00:13:35.030 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:13:35.030 slat (usec): min=3, max=11017, avg=111.97, stdev=614.17 00:13:35.030 clat (usec): min=6401, max=31589, avg=15390.01, stdev=3681.85 00:13:35.030 lat (usec): min=6414, max=31602, avg=15501.98, stdev=3701.17 00:13:35.031 clat percentiles (usec): 00:13:35.031 | 1.00th=[ 8291], 5.00th=[10552], 10.00th=[12387], 20.00th=[13304], 00:13:35.031 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:13:35.031 | 70.00th=[16909], 80.00th=[17433], 90.00th=[20579], 95.00th=[23200], 00:13:35.031 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:13:35.031 | 99.99th=[31589] 00:13:35.031 bw ( KiB/s): min=16384, max=16384, per=22.28%, avg=16384.00, stdev= 0.00, samples=2 00:13:35.031 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:35.031 lat (msec) : 4=0.01%, 10=3.00%, 20=84.61%, 50=12.37% 00:13:35.031 cpu : usr=4.79%, sys=7.28%, ctx=418, majf=0, minf=13 00:13:35.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:35.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:35.031 issued rwts: total=4061,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:35.031 00:13:35.031 Run status group 0 (all jobs): 00:13:35.031 READ: bw=67.7MiB/s (71.0MB/s), 14.3MiB/s-20.0MiB/s (15.0MB/s-20.9MB/s), io=68.3MiB (71.6MB), run=1002-1008msec 00:13:35.031 WRITE: bw=71.8MiB/s (75.3MB/s), 15.9MiB/s-20.7MiB/s (16.7MB/s-21.7MB/s), io=72.4MiB (75.9MB), run=1002-1008msec 00:13:35.031 00:13:35.031 Disk stats (read/write): 00:13:35.031 nvme0n1: ios=3585/3584, merge=0/0, ticks=16253/17220, in_queue=33473, util=87.37% 00:13:35.031 nvme0n2: ios=4223/4608, merge=0/0, ticks=13401/12772, in_queue=26173, util=100.00% 00:13:35.031 nvme0n3: ios=3789/4096, merge=0/0, ticks=54918/50054, in_queue=104972, util=99.27% 00:13:35.031 nvme0n4: ios=3556/3584, merge=0/0, ticks=27597/25026, in_queue=52623, util=99.37% 00:13:35.031 00:17:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:35.031 00:17:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2269711 00:13:35.031 00:17:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:35.031 00:17:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:35.031 [global] 00:13:35.031 thread=1 00:13:35.031 invalidate=1 00:13:35.031 rw=read 00:13:35.031 time_based=1 00:13:35.031 runtime=10 00:13:35.031 ioengine=libaio 
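# The [global] options on either side of this comment form the job file that
# fio-wrapper appears to emit for the 10-second hotplug pass above
# (-p nvmf -i 4096 -d 1 -t read -r 10): time_based with runtime=10 keeps the
# read jobs running while the backing bdevs are deleted underneath them,
# direct=1 bypasses the page cache, bs=4096 with iodepth=1 issues one 4 KiB
# read at a time, and norandommap=1 drops the block-coverage map since this
# pass does no data verification.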
00:13:35.031 direct=1 00:13:35.031 bs=4096 00:13:35.031 iodepth=1 00:13:35.031 norandommap=1 00:13:35.031 numjobs=1 00:13:35.031 00:13:35.031 [job0] 00:13:35.031 filename=/dev/nvme0n1 00:13:35.031 [job1] 00:13:35.031 filename=/dev/nvme0n2 00:13:35.031 [job2] 00:13:35.031 filename=/dev/nvme0n3 00:13:35.031 [job3] 00:13:35.031 filename=/dev/nvme0n4 00:13:35.031 Could not set queue depth (nvme0n1) 00:13:35.031 Could not set queue depth (nvme0n2) 00:13:35.031 Could not set queue depth (nvme0n3) 00:13:35.031 Could not set queue depth (nvme0n4) 00:13:35.031 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:35.031 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:35.031 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:35.031 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:35.031 fio-3.35 00:13:35.031 Starting 4 threads 00:13:38.317 00:17:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:38.317 00:17:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:38.317 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1835008, buflen=4096 00:13:38.317 fio: pid=2269811, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:38.317 00:17:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.317 00:17:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:38.317 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=356352, buflen=4096 00:13:38.317 fio: pid=2269810, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:38.575 00:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.575 00:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:38.575 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1978368, buflen=4096 00:13:38.575 fio: pid=2269806, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:38.833 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=18018304, buflen=4096 00:13:38.833 fio: pid=2269807, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:38.833 00:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.833 00:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:39.092 00:13:39.092 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2269806: Tue Jul 16 00:17:30 2024 00:13:39.092 read: IOPS=139, BW=558KiB/s (571kB/s)(1932KiB/3465msec) 00:13:39.092 slat (usec): min=6, max=15506, avg=99.35, stdev=1078.04 00:13:39.092 clat (usec): min=341, max=42243, avg=7022.35, stdev=14959.02 00:13:39.092 lat (usec): min=348, max=55981, avg=7121.88, stdev=15047.89 
00:13:39.092 clat percentiles (usec): 00:13:39.092 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 383], 20.00th=[ 400], 00:13:39.092 | 30.00th=[ 420], 40.00th=[ 441], 50.00th=[ 453], 60.00th=[ 478], 00:13:39.092 | 70.00th=[ 502], 80.00th=[ 586], 90.00th=[41157], 95.00th=[41157], 00:13:39.092 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:39.092 | 99.99th=[42206] 00:13:39.092 bw ( KiB/s): min= 96, max= 1280, per=7.41%, avg=432.00, stdev=520.06, samples=6 00:13:39.092 iops : min= 24, max= 320, avg=108.00, stdev=130.02, samples=6 00:13:39.092 lat (usec) : 500=69.42%, 750=13.02%, 1000=0.41% 00:13:39.092 lat (msec) : 2=0.21%, 4=0.62%, 50=16.12% 00:13:39.092 cpu : usr=0.06%, sys=0.35%, ctx=488, majf=0, minf=1 00:13:39.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.092 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.092 issued rwts: total=484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.092 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2269807: Tue Jul 16 00:17:30 2024 00:13:39.092 read: IOPS=1184, BW=4736KiB/s (4850kB/s)(17.2MiB/3715msec) 00:13:39.092 slat (usec): min=4, max=29548, avg=30.34, stdev=548.97 00:13:39.092 clat (usec): min=290, max=42056, avg=808.64, stdev=4026.04 00:13:39.092 lat (usec): min=296, max=42070, avg=838.99, stdev=4063.62 00:13:39.092 clat percentiles (usec): 00:13:39.092 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 334], 20.00th=[ 367], 00:13:39.092 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 420], 00:13:39.092 | 70.00th=[ 433], 80.00th=[ 453], 90.00th=[ 478], 95.00th=[ 498], 00:13:39.092 | 99.00th=[ 5407], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:39.092 | 99.99th=[42206] 00:13:39.092 bw ( KiB/s): min= 96, max= 9960, per=78.75%, avg=4593.57, stdev=4274.59, samples=7 00:13:39.092 iops : min= 24, max= 2490, avg=1148.29, stdev=1068.56, samples=7 00:13:39.092 lat (usec) : 500=95.23%, 750=3.61%, 1000=0.05% 00:13:39.092 lat (msec) : 2=0.05%, 4=0.05%, 10=0.02%, 50=0.98% 00:13:39.092 cpu : usr=0.65%, sys=2.07%, ctx=4406, majf=0, minf=1 00:13:39.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.092 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.092 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.092 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2269810: Tue Jul 16 00:17:30 2024 00:13:39.092 read: IOPS=27, BW=109KiB/s (111kB/s)(348KiB/3199msec) 00:13:39.092 slat (nsec): min=8312, max=39307, avg=23303.22, stdev=9783.44 00:13:39.092 clat (usec): min=442, max=42989, avg=36482.65, stdev=13026.57 00:13:39.092 lat (usec): min=452, max=43009, avg=36506.06, stdev=13029.46 00:13:39.092 clat percentiles (usec): 00:13:39.092 | 1.00th=[ 445], 5.00th=[ 529], 10.00th=[ 701], 20.00th=[41157], 00:13:39.092 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:39.092 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:39.092 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:13:39.092 | 99.99th=[42730] 
00:13:39.092 bw ( KiB/s): min= 96, max= 144, per=1.80%, avg=105.33, stdev=19.21, samples=6 00:13:39.092 iops : min= 24, max= 36, avg=26.33, stdev= 4.80, samples=6 00:13:39.092 lat (usec) : 500=4.55%, 750=6.82% 00:13:39.092 lat (msec) : 50=87.50% 00:13:39.092 cpu : usr=0.00%, sys=0.13%, ctx=89, majf=0, minf=1 00:13:39.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.092 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.092 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.092 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2269811: Tue Jul 16 00:17:30 2024 00:13:39.092 read: IOPS=153, BW=611KiB/s (626kB/s)(1792KiB/2931msec) 00:13:39.092 slat (nsec): min=6948, max=84174, avg=11251.56, stdev=8480.04 00:13:39.092 clat (usec): min=333, max=42058, avg=6478.46, stdev=14542.00 00:13:39.092 lat (usec): min=341, max=42078, avg=6489.65, stdev=14548.05 00:13:39.092 clat percentiles (usec): 00:13:39.092 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 363], 00:13:39.092 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 383], 00:13:39.092 | 70.00th=[ 388], 80.00th=[ 461], 90.00th=[41157], 95.00th=[41157], 00:13:39.092 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:39.092 | 99.99th=[42206] 00:13:39.092 bw ( KiB/s): min= 96, max= 112, per=1.78%, avg=104.00, stdev= 5.66, samples=5 00:13:39.092 iops : min= 24, max= 28, avg=26.00, stdev= 1.41, samples=5 00:13:39.092 lat (usec) : 500=81.07%, 750=3.56% 00:13:39.092 lat (msec) : 2=0.22%, 50=14.92% 00:13:39.092 cpu : usr=0.10%, sys=0.27%, ctx=449, majf=0, minf=1 00:13:39.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.093 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.093 issued rwts: total=449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.093 00:13:39.093 Run status group 0 (all jobs): 00:13:39.093 READ: bw=5833KiB/s (5973kB/s), 109KiB/s-4736KiB/s (111kB/s-4850kB/s), io=21.2MiB (22.2MB), run=2931-3715msec 00:13:39.093 00:13:39.093 Disk stats (read/write): 00:13:39.093 nvme0n1: ios=454/0, merge=0/0, ticks=3292/0, in_queue=3292, util=94.65% 00:13:39.093 nvme0n2: ios=4194/0, merge=0/0, ticks=3397/0, in_queue=3397, util=94.80% 00:13:39.093 nvme0n3: ios=134/0, merge=0/0, ticks=4362/0, in_queue=4362, util=99.66% 00:13:39.093 nvme0n4: ios=392/0, merge=0/0, ticks=2838/0, in_queue=2838, util=96.74% 00:13:39.093 00:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.093 00:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:39.351 00:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.351 00:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:39.610 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.610 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:39.868 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.868 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:40.127 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:40.127 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2269711 00:13:40.127 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:40.127 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:40.386 nvmf hotplug test: fio failed as expected 00:13:40.386 00:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.645 rmmod nvme_tcp 00:13:40.645 rmmod nvme_fabrics 00:13:40.645 rmmod nvme_keyring 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # 
return 0 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2267679 ']' 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2267679 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2267679 ']' 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2267679 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2267679 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2267679' 00:13:40.645 killing process with pid 2267679 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2267679 00:13:40.645 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2267679 00:13:40.902 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.902 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:40.902 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:40.902 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.902 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.902 00:17:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.902 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.902 00:17:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.483 00:17:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:43.483 00:13:43.483 real 0m24.095s 00:13:43.483 user 1m24.590s 00:13:43.483 sys 0m6.566s 00:13:43.483 00:17:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.483 00:17:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.483 ************************************ 00:13:43.483 END TEST nvmf_fio_target 00:13:43.483 ************************************ 00:13:43.483 00:17:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:43.483 00:17:34 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:43.483 00:17:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.483 00:17:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.483 00:17:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.483 ************************************ 00:13:43.483 START TEST nvmf_bdevio 00:13:43.483 ************************************ 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:43.483 * Looking for test storage... 
00:13:43.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.483 00:17:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:45.444 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:45.444 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.444 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:45.445 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:45.445 
Found net devices under 0000:0a:00.1: cvl_0_1 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:45.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:13:45.445 00:13:45.445 --- 10.0.0.2 ping statistics --- 00:13:45.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.445 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:13:45.445 00:13:45.445 --- 10.0.0.1 ping statistics --- 00:13:45.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.445 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2272426 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2272426 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2272426 ']' 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.445 00:17:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.445 [2024-07-16 00:17:36.810715] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:13:45.445 [2024-07-16 00:17:36.810803] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.445 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.445 [2024-07-16 00:17:36.890885] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.445 [2024-07-16 00:17:37.027919] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.445 [2024-07-16 00:17:37.027990] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
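Note: by this point nvmf_tcp_init has built the two-namespace topology the rest of the run relies on: cvl_0_0 becomes the target side (10.0.0.2) inside namespace cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP/4420 is opened in iptables, and both directions are ping-verified. A condensed replay of those steps, a sketch with names lifted from the log:

# Condensed nvmf_tcp_init, as traced above
ns=cvl_0_0_ns_spdk target_if=cvl_0_0 initiator_if=cvl_0_1
ip -4 addr flush "$target_if"; ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"            # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # root ns -> target
ip netns exec "$ns" ping -c 1 10.0.0.1          # target ns -> initiator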
00:13:45.445 [2024-07-16 00:17:37.028028] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.445 [2024-07-16 00:17:37.028050] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.445 [2024-07-16 00:17:37.028070] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.445 [2024-07-16 00:17:37.028197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:45.445 [2024-07-16 00:17:37.028264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:45.445 [2024-07-16 00:17:37.028296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:45.445 [2024-07-16 00:17:37.028303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.704 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.704 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:45.704 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.704 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.705 [2024-07-16 00:17:37.198656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.705 Malloc0 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
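Note: bdevio's target side is provisioned entirely over JSON-RPC: a TCP transport, a 64 MiB / 512 B-block malloc bdev, a subsystem, a namespace, and (just below) a listener. Issued by hand, the same five calls might look like this -- rpc.py talking to the default /var/tmp/spdk.sock, with arguments copied from the rpc_cmd lines in the trace:

rpc=./scripts/rpc.py                 # defaults to -s /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420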
00:13:45.705 [2024-07-16 00:17:37.250110] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:45.705 { 00:13:45.705 "params": { 00:13:45.705 "name": "Nvme$subsystem", 00:13:45.705 "trtype": "$TEST_TRANSPORT", 00:13:45.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:45.705 "adrfam": "ipv4", 00:13:45.705 "trsvcid": "$NVMF_PORT", 00:13:45.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:45.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:45.705 "hdgst": ${hdgst:-false}, 00:13:45.705 "ddgst": ${ddgst:-false} 00:13:45.705 }, 00:13:45.705 "method": "bdev_nvme_attach_controller" 00:13:45.705 } 00:13:45.705 EOF 00:13:45.705 )") 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:45.705 00:17:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:45.705 "params": { 00:13:45.705 "name": "Nvme1", 00:13:45.705 "trtype": "tcp", 00:13:45.705 "traddr": "10.0.0.2", 00:13:45.705 "adrfam": "ipv4", 00:13:45.705 "trsvcid": "4420", 00:13:45.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.705 "hdgst": false, 00:13:45.705 "ddgst": false 00:13:45.705 }, 00:13:45.705 "method": "bdev_nvme_attach_controller" 00:13:45.705 }' 00:13:45.705 [2024-07-16 00:17:37.294671] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
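Note: the JSON printed above is only the bdev_nvme_attach_controller entry; gen_nvmf_target_json wraps it in the usual subsystems/bdev envelope before handing it to bdevio on fd 62 via process substitution. A sketch of that pattern -- the envelope shape here is an assumption from SPDK's standard JSON config layout, not shown verbatim in the trace:

# Feed a generated config to bdevio without a temp file; <(...) is what
# shows up as --json /dev/fd/62 in the trace
./test/bdev/bdevio/bdevio --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_nvme_attach_controller", "params": {
    "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false}}
]}]}
EOF
)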
00:13:45.705 [2024-07-16 00:17:37.294751] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272575 ] 00:13:45.963 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.963 [2024-07-16 00:17:37.355502] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.963 [2024-07-16 00:17:37.467643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.963 [2024-07-16 00:17:37.467696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.963 [2024-07-16 00:17:37.467700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.222 I/O targets: 00:13:46.222 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:46.222 00:13:46.222 00:13:46.222 CUnit - A unit testing framework for C - Version 2.1-3 00:13:46.222 http://cunit.sourceforge.net/ 00:13:46.222 00:13:46.222 00:13:46.222 Suite: bdevio tests on: Nvme1n1 00:13:46.222 Test: blockdev write read block ...passed 00:13:46.222 Test: blockdev write zeroes read block ...passed 00:13:46.222 Test: blockdev write zeroes read no split ...passed 00:13:46.222 Test: blockdev write zeroes read split ...passed 00:13:46.481 Test: blockdev write zeroes read split partial ...passed 00:13:46.481 Test: blockdev reset ...[2024-07-16 00:17:37.851808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:46.481 [2024-07-16 00:17:37.851935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22976d0 (9): Bad file descriptor 00:13:46.481 [2024-07-16 00:17:37.912193] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:46.481 passed 00:13:46.481 Test: blockdev write read 8 blocks ...passed 00:13:46.481 Test: blockdev write read size > 128k ...passed 00:13:46.481 Test: blockdev write read invalid size ...passed 00:13:46.481 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:46.481 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:46.481 Test: blockdev write read max offset ...passed 00:13:46.481 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:46.481 Test: blockdev writev readv 8 blocks ...passed 00:13:46.481 Test: blockdev writev readv 30 x 1block ...passed 00:13:46.481 Test: blockdev writev readv block ...passed 00:13:46.481 Test: blockdev writev readv size > 128k ...passed 00:13:46.481 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:46.481 Test: blockdev comparev and writev ...[2024-07-16 00:17:38.092044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.481 [2024-07-16 00:17:38.092081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:46.481 [2024-07-16 00:17:38.092107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.481 [2024-07-16 00:17:38.092125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:46.481 [2024-07-16 00:17:38.092578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.481 [2024-07-16 00:17:38.092603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:46.481 [2024-07-16 00:17:38.092626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.481 [2024-07-16 00:17:38.092644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:46.481 [2024-07-16 00:17:38.093062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.481 [2024-07-16 00:17:38.093087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:46.481 [2024-07-16 00:17:38.093110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.481 [2024-07-16 00:17:38.093127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:46.481 [2024-07-16 00:17:38.093560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.481 [2024-07-16 00:17:38.093585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:46.481 [2024-07-16 00:17:38.093607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.481 [2024-07-16 00:17:38.093622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:46.740 passed 00:13:46.740 Test: blockdev nvme passthru rw ...passed 00:13:46.740 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:17:38.178333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.740 [2024-07-16 00:17:38.178361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:46.740 [2024-07-16 00:17:38.178620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.740 [2024-07-16 00:17:38.178644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:46.740 [2024-07-16 00:17:38.178875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.740 [2024-07-16 00:17:38.178906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:46.740 [2024-07-16 00:17:38.179166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.740 [2024-07-16 00:17:38.179190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:46.740 passed 00:13:46.740 Test: blockdev nvme admin passthru ...passed 00:13:46.740 Test: blockdev copy ...passed 00:13:46.740 00:13:46.740 Run Summary: Type Total Ran Passed Failed Inactive 00:13:46.740 suites 1 1 n/a 0 0 00:13:46.740 tests 23 23 23 0 0 00:13:46.740 asserts 152 152 152 0 n/a 00:13:46.740 00:13:46.740 Elapsed time = 1.186 seconds 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.000 rmmod nvme_tcp 00:13:47.000 rmmod nvme_fabrics 00:13:47.000 rmmod nvme_keyring 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2272426 ']' 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2272426 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2272426 ']' 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2272426 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2272426 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2272426' 00:13:47.000 killing process with pid 2272426 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2272426 00:13:47.000 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2272426 00:13:47.259 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:47.259 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:47.259 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:47.259 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.259 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.259 00:17:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.259 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.259 00:17:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.797 00:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.797 00:13:49.797 real 0m6.331s 00:13:49.797 user 0m10.079s 00:13:49.797 sys 0m2.076s 00:13:49.797 00:17:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.797 00:17:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:49.797 ************************************ 00:13:49.797 END TEST nvmf_bdevio 00:13:49.797 ************************************ 00:13:49.797 00:17:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:49.797 00:17:40 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:49.797 00:17:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.797 00:17:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.797 00:17:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.797 ************************************ 00:13:49.797 START TEST nvmf_auth_target 00:13:49.797 ************************************ 00:13:49.797 00:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:49.797 * Looking for test storage... 
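Note: teardown mirrors setup: the target process is killed by pid, the nvme-tcp stack is unloaded (taking nvme_fabrics and nvme_keyring with it, per the rmmod lines above), and the namespace plumbing is undone. A condensed sketch of what killprocess and nvmftestfini do here:

kill "$nvmfpid"
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done   # wait for exit
modprobe -v -r nvme-tcp                 # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
ip netns delete cvl_0_0_ns_spdk         # cvl_0_0 returns to the root namespace
ip -4 addr flush cvl_0_1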
00:13:49.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.797 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.798 00:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.702 00:17:42 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:51.702 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:51.702 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.702 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:13:51.703 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:51.703 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.703 00:17:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:13:51.703 00:13:51.703 --- 10.0.0.2 ping statistics --- 00:13:51.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.703 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:13:51.703 00:13:51.703 --- 10.0.0.1 ping statistics --- 00:13:51.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.703 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2274637 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2274637 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2274637 ']' 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
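Note: waitforlisten gates the test on the target's RPC socket actually accepting connections, not merely on the process existing. In essence -- a sketch only; the real helper lives in autotest_common.sh and bounds the wait with max_retries:

waitforlisten() {                       # waitforlisten <pid> [<rpc socket>]
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    while kill -0 "$pid" 2>/dev/null; do
        # any successful RPC proves the app is up and listening
        ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                            # process died before the socket came up
}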
00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.703 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2274792 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=47df059217ee6bcdd95370a3b435ecda210baa5dbba9bb43 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PWA 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 47df059217ee6bcdd95370a3b435ecda210baa5dbba9bb43 0 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 47df059217ee6bcdd95370a3b435ecda210baa5dbba9bb43 0 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=47df059217ee6bcdd95370a3b435ecda210baa5dbba9bb43 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PWA 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PWA 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.PWA 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf4b32f88266e2b5df1d9ebfd571b62be010ad55079e464dbd62855faf7794fd 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dIQ 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf4b32f88266e2b5df1d9ebfd571b62be010ad55079e464dbd62855faf7794fd 3 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cf4b32f88266e2b5df1d9ebfd571b62be010ad55079e464dbd62855faf7794fd 3 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cf4b32f88266e2b5df1d9ebfd571b62be010ad55079e464dbd62855faf7794fd 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:52.637 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dIQ 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dIQ 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.dIQ 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:52.895 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8ec17a4ca5d59cdc607a89a5dc33ca60 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uT4 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8ec17a4ca5d59cdc607a89a5dc33ca60 1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8ec17a4ca5d59cdc607a89a5dc33ca60 1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=8ec17a4ca5d59cdc607a89a5dc33ca60 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uT4 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uT4 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.uT4 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6966e0a90a2c6884e273c49157b10b08744938cdba27796b 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5e5 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6966e0a90a2c6884e273c49157b10b08744938cdba27796b 2 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6966e0a90a2c6884e273c49157b10b08744938cdba27796b 2 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6966e0a90a2c6884e273c49157b10b08744938cdba27796b 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5e5 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5e5 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.5e5 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1fe7b5e33626f8990c421385a0fd9052cf1e096b1ee52a72 00:13:52.896 
00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8c0 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1fe7b5e33626f8990c421385a0fd9052cf1e096b1ee52a72 2 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1fe7b5e33626f8990c421385a0fd9052cf1e096b1ee52a72 2 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1fe7b5e33626f8990c421385a0fd9052cf1e096b1ee52a72 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8c0 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8c0 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.8c0 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c8a5d910785b051599fcffe99292f38f 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fDd 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c8a5d910785b051599fcffe99292f38f 1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c8a5d910785b051599fcffe99292f38f 1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c8a5d910785b051599fcffe99292f38f 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fDd 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fDd 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.fDd 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19ac5a8ad176d12827b3dcac554cc2c30fb0439780efb117dd06e3dea05b72b7 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Dzr 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19ac5a8ad176d12827b3dcac554cc2c30fb0439780efb117dd06e3dea05b72b7 3 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19ac5a8ad176d12827b3dcac554cc2c30fb0439780efb117dd06e3dea05b72b7 3 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19ac5a8ad176d12827b3dcac554cc2c30fb0439780efb117dd06e3dea05b72b7 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:52.896 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Dzr 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Dzr 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Dzr 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2274637 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2274637 ']' 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
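The gen_dhchap_key calls traced above all follow one pattern: draw len/2 bytes from /dev/urandom as a hex string with xxd, wrap that string into the DHHC-1 secret format inside an embedded python snippet, store it in a mktemp file restricted to 0600, and echo the path so the caller can capture it into keys[]/ckeys[]. The python body itself is not visible in the xtrace; decoding the DHHC-1 secrets printed later in this log shows the base64 payload is the ASCII hex key plus four extra bytes, consistent with an appended CRC32, so the sketch below reconstructs the helper on that assumption (helper names and every other step match the trace; the python body and the CRC byte order are inferred):

format_dhchap_key() {
	local key=$1 digest=$2
	# Assumed payload layout: base64(ascii-hex key + 4-byte CRC32 of it),
	# printed as "DHHC-1:<digest index as two hex digits>:<base64>:".
	python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # byte order is an assumption
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key() {
	local digest=$1 len=$2 file key
	local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

	key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex chars = len/2 random bytes
	file=$(mktemp -t "spdk.key-$digest.XXX")
	format_dhchap_key "$key" "${digests[$digest]}" > "$file"
	chmod 0600 "$file"  # the secret never leaves an owner-only file
	echo "$file"        # caller captures the path into keys[]/ckeys[]
}

So gen_dhchap_key sha384 48, as called at target/auth.sh@68 above, yields 24 random bytes rendered as 48 hex characters and tagged with digest index 2, which is exactly how keys[2]=/tmp/spdk.key-sha384.8c0 was produced.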
00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2274792 /var/tmp/host.sock 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2274792 ']' 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:53.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.155 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.412 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.412 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:53.412 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:53.412 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.412 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.671 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.671 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.671 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PWA 00:13:53.671 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.671 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.671 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.671 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.PWA 00:13:53.671 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.PWA 00:13:53.932 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.dIQ ]] 00:13:53.932 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dIQ 00:13:53.932 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.932 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.932 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.932 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dIQ 00:13:53.932 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dIQ 00:13:54.190 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:54.190 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uT4 00:13:54.190 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.190 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.190 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.190 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uT4 00:13:54.190 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uT4 00:13:54.448 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.5e5 ]] 00:13:54.448 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5e5 00:13:54.448 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.448 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.448 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.448 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5e5 00:13:54.448 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5e5 00:13:54.705 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:54.705 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8c0 00:13:54.705 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.705 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.705 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.705 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.8c0 00:13:54.705 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.8c0 00:13:54.963 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.fDd ]] 00:13:54.963 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fDd 00:13:54.963 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.963 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.963 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.963 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fDd 00:13:54.963 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
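Both daemons need the keys by name before any authentication can run, so the loop traced here registers every file twice: rpc_cmd adds it to the nvmf target over the default /var/tmp/spdk.sock, and the hostrpc helper (target/auth.sh@31 above) replays the same call against the host-side app on /var/tmp/host.sock. A condensed sketch of that loop, assuming the keys/ckeys arrays built earlier; the stand-in helpers below are minimal, the real rpc_cmd/hostrpc live in the test framework, and the rpc.py path is the one from this run:

rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

for i in "${!keys[@]}"; do
	rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"      # target side
	hostrpc keyring_file_add_key "key$i" "${keys[i]}"      # host side
	if [[ -n ${ckeys[i]} ]]; then                          # bidirectional rounds also carry a controller key
		rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
		hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
	fi
done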
/tmp/spdk.key-sha256.fDd 00:13:55.221 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:55.221 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Dzr 00:13:55.221 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.221 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.221 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.221 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Dzr 00:13:55.221 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Dzr 00:13:55.478 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:55.478 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:55.478 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.478 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.478 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:55.478 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.735 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.300 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- 
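This is the core of each round: the target authorizes the host NQN on cnode0 with named DHCHAP keys, then the host app dials back with the same key names, and the attach only succeeds if the DH-HMAC-CHAP handshake completes. The two calls from the trace, reflowed for readability (NQNs and address are the ones used throughout this run):

# Target side: authorize the host NQN on the subsystem, binding the key
# names that were registered in the keyrings above.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
	nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
	--dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller over TCP/IPv4 with the matching key names;
# the bdev only appears if authentication succeeds.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
	-q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
	-n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0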
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.300 { 00:13:56.300 "cntlid": 1, 00:13:56.300 "qid": 0, 00:13:56.300 "state": "enabled", 00:13:56.300 "thread": "nvmf_tgt_poll_group_000", 00:13:56.300 "listen_address": { 00:13:56.300 "trtype": "TCP", 00:13:56.300 "adrfam": "IPv4", 00:13:56.300 "traddr": "10.0.0.2", 00:13:56.300 "trsvcid": "4420" 00:13:56.300 }, 00:13:56.300 "peer_address": { 00:13:56.300 "trtype": "TCP", 00:13:56.300 "adrfam": "IPv4", 00:13:56.300 "traddr": "10.0.0.1", 00:13:56.300 "trsvcid": "59884" 00:13:56.300 }, 00:13:56.300 "auth": { 00:13:56.300 "state": "completed", 00:13:56.300 "digest": "sha256", 00:13:56.300 "dhgroup": "null" 00:13:56.300 } 00:13:56.300 } 00:13:56.300 ]' 00:13:56.300 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.558 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.558 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.558 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:56.558 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.558 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.558 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.558 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.815 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:13:57.748 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.748 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.748 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.748 00:17:49 
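Attaching alone does not prove much, so the test then pulls the subsystem's queue pairs and asserts on the auth block that jq extracts: the negotiated digest, the DH group, and a state of "completed". A sketch of the same checks against the JSON shape printed above:

# Fetch qpair state from the target and assert the negotiated auth parameters.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
# "null" here is the literal DH group name (no FFDHE exchange), not a missing field.
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]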
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.748 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.748 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.748 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:57.748 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.006 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.264 00:13:58.264 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.264 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.264 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.521 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.521 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.521 00:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.521 00:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 00:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.521 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.521 { 00:13:58.521 "cntlid": 3, 00:13:58.521 "qid": 0, 00:13:58.521 
"state": "enabled", 00:13:58.521 "thread": "nvmf_tgt_poll_group_000", 00:13:58.521 "listen_address": { 00:13:58.521 "trtype": "TCP", 00:13:58.521 "adrfam": "IPv4", 00:13:58.521 "traddr": "10.0.0.2", 00:13:58.521 "trsvcid": "4420" 00:13:58.521 }, 00:13:58.521 "peer_address": { 00:13:58.521 "trtype": "TCP", 00:13:58.521 "adrfam": "IPv4", 00:13:58.521 "traddr": "10.0.0.1", 00:13:58.521 "trsvcid": "59900" 00:13:58.521 }, 00:13:58.521 "auth": { 00:13:58.521 "state": "completed", 00:13:58.521 "digest": "sha256", 00:13:58.521 "dhgroup": "null" 00:13:58.521 } 00:13:58.521 } 00:13:58.521 ]' 00:13:58.521 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.778 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.778 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.779 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:58.779 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.779 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.779 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.779 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.039 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:13:59.996 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.996 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:59.996 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.996 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.996 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.996 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.996 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:59.996 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:00.252 00:17:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.252 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.508 00:14:00.508 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.508 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.508 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.765 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.766 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.766 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.766 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.766 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.766 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.766 { 00:14:00.766 "cntlid": 5, 00:14:00.766 "qid": 0, 00:14:00.766 "state": "enabled", 00:14:00.766 "thread": "nvmf_tgt_poll_group_000", 00:14:00.766 "listen_address": { 00:14:00.766 "trtype": "TCP", 00:14:00.766 "adrfam": "IPv4", 00:14:00.766 "traddr": "10.0.0.2", 00:14:00.766 "trsvcid": "4420" 00:14:00.766 }, 00:14:00.766 "peer_address": { 00:14:00.766 "trtype": "TCP", 00:14:00.766 "adrfam": "IPv4", 00:14:00.766 "traddr": "10.0.0.1", 00:14:00.766 "trsvcid": "51158" 00:14:00.766 }, 00:14:00.766 "auth": { 00:14:00.766 "state": "completed", 00:14:00.766 "digest": "sha256", 00:14:00.766 "dhgroup": "null" 00:14:00.766 } 00:14:00.766 } 00:14:00.766 ]' 00:14:00.766 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.766 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.766 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.022 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:01.022 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:01.022 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.022 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.022 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.278 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:14:02.204 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.204 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.204 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.204 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.204 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.204 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.204 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.205 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.462 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
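The kernel initiator is exercised with the same material, but nvme-cli takes the DHHC-1 secrets inline instead of keyring names: --dhchap-secret carries the host key and --dhchap-ctrl-secret the controller key. The base64 payloads embed the very hex strings generated at the top of this section (the DHHC-1:02: value below decodes to the 1fe7b5e3... key). The round trip from the trace, reflowed; the target port defaults to 4420:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
	-q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
	--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
	--dhchap-secret 'DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==:' \
	--dhchap-ctrl-secret 'DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0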
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.720 00:14:02.720 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.720 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.720 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.977 { 00:14:02.977 "cntlid": 7, 00:14:02.977 "qid": 0, 00:14:02.977 "state": "enabled", 00:14:02.977 "thread": "nvmf_tgt_poll_group_000", 00:14:02.977 "listen_address": { 00:14:02.977 "trtype": "TCP", 00:14:02.977 "adrfam": "IPv4", 00:14:02.977 "traddr": "10.0.0.2", 00:14:02.977 "trsvcid": "4420" 00:14:02.977 }, 00:14:02.977 "peer_address": { 00:14:02.977 "trtype": "TCP", 00:14:02.977 "adrfam": "IPv4", 00:14:02.977 "traddr": "10.0.0.1", 00:14:02.977 "trsvcid": "51180" 00:14:02.977 }, 00:14:02.977 "auth": { 00:14:02.977 "state": "completed", 00:14:02.977 "digest": "sha256", 00:14:02.977 "dhgroup": "null" 00:14:02.977 } 00:14:02.977 } 00:14:02.977 ]' 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:02.977 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.234 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.234 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.234 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.492 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
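key3 is the one entry generated without a companion controller key (ckeys[3] is empty), so the round above covers unidirectional authentication: the host answers the target's challenge but never challenges back, and the nvme connect accordingly passes only --dhchap-secret. The trace handles the optional argument with bash's ${var:+word} expansion, which is easy to misread, so spelled out (hostnqn stands in for the long UUID NQN and is an assumption of the sketch):

i=3                                              # the round traced above; ckeys[3] was left empty
# ${ckeys[i]:+...} expands to the whole option pair only when ckeys[i] is set
# and non-empty; for i=3 the array stays empty and add_host gets no
# --dhchap-ctrlr-key argument at all.
ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
	--dhchap-key "key$i" "${ckey[@]}"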
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:04.425 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.683 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.941 00:14:04.941 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.941 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.941 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.199 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.199 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.199 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
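With the sha256/null pass finished for all four keys, the trace moves to the next DH group. The whole section is one walk of a three-level matrix over digests, DH groups, and key indices, reconfiguring the host's allowed algorithms before each connect. A sketch of the driving loops as traced at target/auth.sh@91-96; the array contents are an assumption, since this excerpt only shows sha256 with null, ffdhe2048 and ffdhe3072:

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			# Restrict the host to exactly one digest/dhgroup so the
			# negotiated values asserted afterwards are deterministic.
			hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
			connect_authenticate "$digest" "$dhgroup" "$keyid"   # the traced helper
		done
	done
done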
-- # xtrace_disable 00:14:05.199 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.456 { 00:14:05.456 "cntlid": 9, 00:14:05.456 "qid": 0, 00:14:05.456 "state": "enabled", 00:14:05.456 "thread": "nvmf_tgt_poll_group_000", 00:14:05.456 "listen_address": { 00:14:05.456 "trtype": "TCP", 00:14:05.456 "adrfam": "IPv4", 00:14:05.456 "traddr": "10.0.0.2", 00:14:05.456 "trsvcid": "4420" 00:14:05.456 }, 00:14:05.456 "peer_address": { 00:14:05.456 "trtype": "TCP", 00:14:05.456 "adrfam": "IPv4", 00:14:05.456 "traddr": "10.0.0.1", 00:14:05.456 "trsvcid": "51198" 00:14:05.456 }, 00:14:05.456 "auth": { 00:14:05.456 "state": "completed", 00:14:05.456 "digest": "sha256", 00:14:05.456 "dhgroup": "ffdhe2048" 00:14:05.456 } 00:14:05.456 } 00:14:05.456 ]' 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.456 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.714 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:14:06.716 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.716 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.716 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.716 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.716 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.716 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.716 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:06.716 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.973 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.231 00:14:07.231 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.231 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.231 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.488 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.488 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.488 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.488 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.488 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.488 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.488 { 00:14:07.488 "cntlid": 11, 00:14:07.488 "qid": 0, 00:14:07.488 "state": "enabled", 00:14:07.488 "thread": "nvmf_tgt_poll_group_000", 00:14:07.488 "listen_address": { 00:14:07.488 "trtype": "TCP", 00:14:07.488 "adrfam": "IPv4", 00:14:07.488 "traddr": "10.0.0.2", 00:14:07.488 "trsvcid": "4420" 00:14:07.488 }, 00:14:07.488 "peer_address": { 00:14:07.488 "trtype": "TCP", 00:14:07.488 "adrfam": "IPv4", 00:14:07.488 "traddr": "10.0.0.1", 00:14:07.488 "trsvcid": "51236" 00:14:07.488 }, 00:14:07.488 "auth": { 00:14:07.488 "state": "completed", 00:14:07.488 "digest": "sha256", 00:14:07.488 "dhgroup": "ffdhe2048" 00:14:07.488 } 00:14:07.488 } 00:14:07.488 ]' 00:14:07.488 
00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.488 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.488 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.746 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.746 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.746 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.746 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.746 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.004 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:14:08.935 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.935 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.935 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.935 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.935 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.935 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.935 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:08.935 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.192 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.450 00:14:09.450 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.450 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.450 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.708 { 00:14:09.708 "cntlid": 13, 00:14:09.708 "qid": 0, 00:14:09.708 "state": "enabled", 00:14:09.708 "thread": "nvmf_tgt_poll_group_000", 00:14:09.708 "listen_address": { 00:14:09.708 "trtype": "TCP", 00:14:09.708 "adrfam": "IPv4", 00:14:09.708 "traddr": "10.0.0.2", 00:14:09.708 "trsvcid": "4420" 00:14:09.708 }, 00:14:09.708 "peer_address": { 00:14:09.708 "trtype": "TCP", 00:14:09.708 "adrfam": "IPv4", 00:14:09.708 "traddr": "10.0.0.1", 00:14:09.708 "trsvcid": "51264" 00:14:09.708 }, 00:14:09.708 "auth": { 00:14:09.708 "state": "completed", 00:14:09.708 "digest": "sha256", 00:14:09.708 "dhgroup": "ffdhe2048" 00:14:09.708 } 00:14:09.708 } 00:14:09.708 ]' 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.708 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.965 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.965 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.965 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.965 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.965 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.223 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:14:11.155 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.155 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.155 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.155 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.155 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.155 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.155 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.155 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:11.413 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:11.671 00:14:11.671 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.671 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.671 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.932 { 00:14:11.932 "cntlid": 15, 00:14:11.932 "qid": 0, 00:14:11.932 "state": "enabled", 00:14:11.932 "thread": "nvmf_tgt_poll_group_000", 00:14:11.932 "listen_address": { 00:14:11.932 "trtype": "TCP", 00:14:11.932 "adrfam": "IPv4", 00:14:11.932 "traddr": "10.0.0.2", 00:14:11.932 "trsvcid": "4420" 00:14:11.932 }, 00:14:11.932 "peer_address": { 00:14:11.932 "trtype": "TCP", 00:14:11.932 "adrfam": "IPv4", 00:14:11.932 "traddr": "10.0.0.1", 00:14:11.932 "trsvcid": "47114" 00:14:11.932 }, 00:14:11.932 "auth": { 00:14:11.932 "state": "completed", 00:14:11.932 "digest": "sha256", 00:14:11.932 "dhgroup": "ffdhe2048" 00:14:11.932 } 00:14:11.932 } 00:14:11.932 ]' 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.932 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.188 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.188 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.188 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.449 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target 
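Every round closes in the same order throughout this log: detach the SPDK-side controller, run the kernel initiator's connect/disconnect against the same keys, then remove the host from the subsystem so the next key index starts from a clean authorization state. Condensed from the trace:

hostrpc bdev_nvme_detach_controller nvme0        # drop the SPDK host's controller
# ... kernel initiator connect/disconnect with the matching DHHC-1 secrets ...
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"   # de-authorize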
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:13.386 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.644 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.902 00:14:13.902 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.902 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.902 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.160 { 00:14:14.160 "cntlid": 17, 00:14:14.160 "qid": 0, 00:14:14.160 "state": "enabled", 00:14:14.160 "thread": "nvmf_tgt_poll_group_000", 00:14:14.160 "listen_address": { 00:14:14.160 "trtype": "TCP", 00:14:14.160 "adrfam": "IPv4", 00:14:14.160 "traddr": 
"10.0.0.2", 00:14:14.160 "trsvcid": "4420" 00:14:14.160 }, 00:14:14.160 "peer_address": { 00:14:14.160 "trtype": "TCP", 00:14:14.160 "adrfam": "IPv4", 00:14:14.160 "traddr": "10.0.0.1", 00:14:14.160 "trsvcid": "47140" 00:14:14.160 }, 00:14:14.160 "auth": { 00:14:14.160 "state": "completed", 00:14:14.160 "digest": "sha256", 00:14:14.160 "dhgroup": "ffdhe3072" 00:14:14.160 } 00:14:14.160 } 00:14:14.160 ]' 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.160 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.418 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:14:15.353 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.613 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.613 00:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.613 00:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.613 00:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.613 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.613 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:15.613 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.872 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.130 00:14:16.130 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.130 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.130 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.389 { 00:14:16.389 "cntlid": 19, 00:14:16.389 "qid": 0, 00:14:16.389 "state": "enabled", 00:14:16.389 "thread": "nvmf_tgt_poll_group_000", 00:14:16.389 "listen_address": { 00:14:16.389 "trtype": "TCP", 00:14:16.389 "adrfam": "IPv4", 00:14:16.389 "traddr": "10.0.0.2", 00:14:16.389 "trsvcid": "4420" 00:14:16.389 }, 00:14:16.389 "peer_address": { 00:14:16.389 "trtype": "TCP", 00:14:16.389 "adrfam": "IPv4", 00:14:16.389 "traddr": "10.0.0.1", 00:14:16.389 "trsvcid": "47178" 00:14:16.389 }, 00:14:16.389 "auth": { 00:14:16.389 "state": "completed", 00:14:16.389 "digest": "sha256", 00:14:16.389 "dhgroup": "ffdhe3072" 00:14:16.389 } 00:14:16.389 } 00:14:16.389 ]' 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.389 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.648 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:14:17.585 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.585 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.585 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.585 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.585 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.585 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.585 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.585 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.844 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.414 00:14:18.414 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.414 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.414 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.673 { 00:14:18.673 "cntlid": 21, 00:14:18.673 "qid": 0, 00:14:18.673 "state": "enabled", 00:14:18.673 "thread": "nvmf_tgt_poll_group_000", 00:14:18.673 "listen_address": { 00:14:18.673 "trtype": "TCP", 00:14:18.673 "adrfam": "IPv4", 00:14:18.673 "traddr": "10.0.0.2", 00:14:18.673 "trsvcid": "4420" 00:14:18.673 }, 00:14:18.673 "peer_address": { 00:14:18.673 "trtype": "TCP", 00:14:18.673 "adrfam": "IPv4", 00:14:18.673 "traddr": "10.0.0.1", 00:14:18.673 "trsvcid": "47202" 00:14:18.673 }, 00:14:18.673 "auth": { 00:14:18.673 "state": "completed", 00:14:18.673 "digest": "sha256", 00:14:18.673 "dhgroup": "ffdhe3072" 00:14:18.673 } 00:14:18.673 } 00:14:18.673 ]' 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.673 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.932 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:14:19.869 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
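
The trace to this point has completed identical passes for the last ffdhe2048 key and for keys 0-2 of ffdhe3072: pin the host to one digest/dhgroup, register the host on the subsystem with a DH-HMAC-CHAP key pair, attach and verify an authenticated qpair over RPC, then redo the handshake through nvme-cli. Below is a condensed sketch of one such pass, reconstructed from the trace rather than quoted from target/auth.sh itself: the rpc.py subcommands, flags, NQNs, and addresses are taken verbatim from the log, while RPC, HOSTID, DHCHAP_SECRET, and DHCHAP_CTRL_SECRET are illustrative stand-ins (the trace passes literal DHHC-1:... secrets, and its rpc_cmd wrapper presumably talks to the target's default RPC socket, which the log does not show).

    #!/usr/bin/env bash
    # Condensed sketch of one connect_authenticate pass, reconstructed
    # from the trace above; placeholder variables are marked as such.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

    digest=sha256 dhgroup=ffdhe3072 key=key2   # loop variables; the outer loops
                                               # walk dhgroups x key indices 0-3
    # 1. Pin the host-side initiator to a single digest/dhgroup combination.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 2. Register the host on the target subsystem with its DH-HMAC-CHAP keys
    #    (key indices without a controller key omit --dhchap-ctrlr-key,
    #    cf. the ${ckeys[$3]:+...} expansion in the trace).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
    # 3. Attach a controller from the host side, authenticating with the keys.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
    # 4. Confirm on the target that the qpair authenticated as expected.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"   ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup"  ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    # 5. Detach, redo the same handshake via nvme-cli using the generated
    #    DHHC-1 secrets (placeholders here; the trace passes them literally),
    #    then deregister the host for the next iteration.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid "$HOSTID" \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
    nvme disconnect -n "$SUBNQN"
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The remainder of the section repeats this cycle for key3 of ffdhe3072 (whose pass carries no controller key, so both --dhchap-ctrlr-key and --dhchap-ctrl-secret drop out) and then for the ffdhe4096 and ffdhe6144 groups.
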
00:14:19.870 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.870 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.870 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.870 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.870 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.870 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.870 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.127 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.385 00:14:20.644 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.644 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.644 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.901 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.901 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.901 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.901 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:20.901 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.901 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.901 { 00:14:20.901 "cntlid": 23, 00:14:20.901 "qid": 0, 00:14:20.901 "state": "enabled", 00:14:20.901 "thread": "nvmf_tgt_poll_group_000", 00:14:20.901 "listen_address": { 00:14:20.901 "trtype": "TCP", 00:14:20.901 "adrfam": "IPv4", 00:14:20.901 "traddr": "10.0.0.2", 00:14:20.901 "trsvcid": "4420" 00:14:20.901 }, 00:14:20.901 "peer_address": { 00:14:20.901 "trtype": "TCP", 00:14:20.901 "adrfam": "IPv4", 00:14:20.901 "traddr": "10.0.0.1", 00:14:20.901 "trsvcid": "51690" 00:14:20.901 }, 00:14:20.901 "auth": { 00:14:20.902 "state": "completed", 00:14:20.902 "digest": "sha256", 00:14:20.902 "dhgroup": "ffdhe3072" 00:14:20.902 } 00:14:20.902 } 00:14:20.902 ]' 00:14:20.902 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.902 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.902 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.902 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.902 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.902 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.902 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.902 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.160 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:22.093 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.350 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.918 00:14:22.918 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.918 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.918 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.175 { 00:14:23.175 "cntlid": 25, 00:14:23.175 "qid": 0, 00:14:23.175 "state": "enabled", 00:14:23.175 "thread": "nvmf_tgt_poll_group_000", 00:14:23.175 "listen_address": { 00:14:23.175 "trtype": "TCP", 00:14:23.175 "adrfam": "IPv4", 00:14:23.175 "traddr": "10.0.0.2", 00:14:23.175 "trsvcid": "4420" 00:14:23.175 }, 00:14:23.175 "peer_address": { 00:14:23.175 "trtype": "TCP", 00:14:23.175 "adrfam": "IPv4", 00:14:23.175 "traddr": "10.0.0.1", 00:14:23.175 "trsvcid": "51710" 00:14:23.175 }, 00:14:23.175 "auth": { 00:14:23.175 "state": "completed", 00:14:23.175 "digest": "sha256", 00:14:23.175 "dhgroup": "ffdhe4096" 00:14:23.175 } 00:14:23.175 } 00:14:23.175 ]' 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.175 00:18:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.175 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.433 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:14:24.370 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.370 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.370 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.370 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.370 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.370 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.370 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.370 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.938 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.939 00:18:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.939 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.939 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.197 00:14:25.197 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.197 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.197 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.455 { 00:14:25.455 "cntlid": 27, 00:14:25.455 "qid": 0, 00:14:25.455 "state": "enabled", 00:14:25.455 "thread": "nvmf_tgt_poll_group_000", 00:14:25.455 "listen_address": { 00:14:25.455 "trtype": "TCP", 00:14:25.455 "adrfam": "IPv4", 00:14:25.455 "traddr": "10.0.0.2", 00:14:25.455 "trsvcid": "4420" 00:14:25.455 }, 00:14:25.455 "peer_address": { 00:14:25.455 "trtype": "TCP", 00:14:25.455 "adrfam": "IPv4", 00:14:25.455 "traddr": "10.0.0.1", 00:14:25.455 "trsvcid": "51746" 00:14:25.455 }, 00:14:25.455 "auth": { 00:14:25.455 "state": "completed", 00:14:25.455 "digest": "sha256", 00:14:25.455 "dhgroup": "ffdhe4096" 00:14:25.455 } 00:14:25.455 } 00:14:25.455 ]' 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.455 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.455 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.455 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.455 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.713 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:14:26.718 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.719 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.719 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.719 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.719 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.719 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.719 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.719 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.976 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.541 00:14:27.541 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.541 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.541 00:18:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.541 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.541 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.541 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.541 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.541 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.541 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.541 { 00:14:27.541 "cntlid": 29, 00:14:27.541 "qid": 0, 00:14:27.541 "state": "enabled", 00:14:27.541 "thread": "nvmf_tgt_poll_group_000", 00:14:27.541 "listen_address": { 00:14:27.541 "trtype": "TCP", 00:14:27.541 "adrfam": "IPv4", 00:14:27.541 "traddr": "10.0.0.2", 00:14:27.541 "trsvcid": "4420" 00:14:27.541 }, 00:14:27.541 "peer_address": { 00:14:27.541 "trtype": "TCP", 00:14:27.541 "adrfam": "IPv4", 00:14:27.541 "traddr": "10.0.0.1", 00:14:27.541 "trsvcid": "51768" 00:14:27.541 }, 00:14:27.541 "auth": { 00:14:27.541 "state": "completed", 00:14:27.541 "digest": "sha256", 00:14:27.541 "dhgroup": "ffdhe4096" 00:14:27.541 } 00:14:27.541 } 00:14:27.541 ]' 00:14:27.541 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.800 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.800 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.800 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.800 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.800 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.800 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.800 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.059 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:14:28.999 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.999 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.999 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.999 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.999 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.999 00:18:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.999 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.999 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.257 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.515 00:14:29.774 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.775 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.775 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.033 { 00:14:30.033 "cntlid": 31, 00:14:30.033 "qid": 0, 00:14:30.033 "state": "enabled", 00:14:30.033 "thread": "nvmf_tgt_poll_group_000", 00:14:30.033 "listen_address": { 00:14:30.033 "trtype": "TCP", 00:14:30.033 "adrfam": "IPv4", 00:14:30.033 "traddr": "10.0.0.2", 00:14:30.033 "trsvcid": "4420" 00:14:30.033 }, 
00:14:30.033 "peer_address": { 00:14:30.033 "trtype": "TCP", 00:14:30.033 "adrfam": "IPv4", 00:14:30.033 "traddr": "10.0.0.1", 00:14:30.033 "trsvcid": "51802" 00:14:30.033 }, 00:14:30.033 "auth": { 00:14:30.033 "state": "completed", 00:14:30.033 "digest": "sha256", 00:14:30.033 "dhgroup": "ffdhe4096" 00:14:30.033 } 00:14:30.033 } 00:14:30.033 ]' 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.033 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.291 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.224 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.482 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.482 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.046 00:14:32.046 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.046 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.046 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.304 { 00:14:32.304 "cntlid": 33, 00:14:32.304 "qid": 0, 00:14:32.304 "state": "enabled", 00:14:32.304 "thread": "nvmf_tgt_poll_group_000", 00:14:32.304 "listen_address": { 00:14:32.304 "trtype": "TCP", 00:14:32.304 "adrfam": "IPv4", 00:14:32.304 "traddr": "10.0.0.2", 00:14:32.304 "trsvcid": "4420" 00:14:32.304 }, 00:14:32.304 "peer_address": { 00:14:32.304 "trtype": "TCP", 00:14:32.304 "adrfam": "IPv4", 00:14:32.304 "traddr": "10.0.0.1", 00:14:32.304 "trsvcid": "44956" 00:14:32.304 }, 00:14:32.304 "auth": { 00:14:32.304 "state": "completed", 00:14:32.304 "digest": "sha256", 00:14:32.304 "dhgroup": "ffdhe6144" 00:14:32.304 } 00:14:32.304 } 00:14:32.304 ]' 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.304 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.561 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.561 00:18:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.561 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.819 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:14:33.753 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.753 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.753 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.753 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.753 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.753 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.753 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.753 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.011 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.577 00:14:34.577 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.577 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.577 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.836 { 00:14:34.836 "cntlid": 35, 00:14:34.836 "qid": 0, 00:14:34.836 "state": "enabled", 00:14:34.836 "thread": "nvmf_tgt_poll_group_000", 00:14:34.836 "listen_address": { 00:14:34.836 "trtype": "TCP", 00:14:34.836 "adrfam": "IPv4", 00:14:34.836 "traddr": "10.0.0.2", 00:14:34.836 "trsvcid": "4420" 00:14:34.836 }, 00:14:34.836 "peer_address": { 00:14:34.836 "trtype": "TCP", 00:14:34.836 "adrfam": "IPv4", 00:14:34.836 "traddr": "10.0.0.1", 00:14:34.836 "trsvcid": "44994" 00:14:34.836 }, 00:14:34.836 "auth": { 00:14:34.836 "state": "completed", 00:14:34.836 "digest": "sha256", 00:14:34.836 "dhgroup": "ffdhe6144" 00:14:34.836 } 00:14:34.836 } 00:14:34.836 ]' 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.836 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.095 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:36.468 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.469 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.058 00:14:37.058 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.058 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.058 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.314 { 00:14:37.314 "cntlid": 37, 00:14:37.314 "qid": 0, 00:14:37.314 "state": "enabled", 00:14:37.314 "thread": "nvmf_tgt_poll_group_000", 00:14:37.314 "listen_address": { 00:14:37.314 "trtype": "TCP", 00:14:37.314 "adrfam": "IPv4", 00:14:37.314 "traddr": "10.0.0.2", 00:14:37.314 "trsvcid": "4420" 00:14:37.314 }, 00:14:37.314 "peer_address": { 00:14:37.314 "trtype": "TCP", 00:14:37.314 "adrfam": "IPv4", 00:14:37.314 "traddr": "10.0.0.1", 00:14:37.314 "trsvcid": "45018" 00:14:37.314 }, 00:14:37.314 "auth": { 00:14:37.314 "state": "completed", 00:14:37.314 "digest": "sha256", 00:14:37.314 "dhgroup": "ffdhe6144" 00:14:37.314 } 00:14:37.314 } 00:14:37.314 ]' 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.314 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.877 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:14:38.807 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.807 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.807 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.807 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.807 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.807 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.807 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:38.807 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.065 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.630 00:14:39.630 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.631 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.631 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.888 { 00:14:39.888 "cntlid": 39, 00:14:39.888 "qid": 0, 00:14:39.888 "state": "enabled", 00:14:39.888 "thread": "nvmf_tgt_poll_group_000", 00:14:39.888 "listen_address": { 00:14:39.888 "trtype": "TCP", 00:14:39.888 "adrfam": "IPv4", 00:14:39.888 "traddr": "10.0.0.2", 00:14:39.888 "trsvcid": "4420" 00:14:39.888 }, 00:14:39.888 "peer_address": { 00:14:39.888 "trtype": "TCP", 00:14:39.888 "adrfam": "IPv4", 00:14:39.888 "traddr": "10.0.0.1", 00:14:39.888 "trsvcid": "45048" 00:14:39.888 }, 00:14:39.888 "auth": { 00:14:39.888 "state": "completed", 00:14:39.888 "digest": "sha256", 00:14:39.888 "dhgroup": "ffdhe6144" 00:14:39.888 } 00:14:39.888 } 00:14:39.888 ]' 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.888 00:18:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.888 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.145 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.079 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.336 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:41.336 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.337 00:18:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.337 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.268 00:14:42.268 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.268 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.268 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.526 { 00:14:42.526 "cntlid": 41, 00:14:42.526 "qid": 0, 00:14:42.526 "state": "enabled", 00:14:42.526 "thread": "nvmf_tgt_poll_group_000", 00:14:42.526 "listen_address": { 00:14:42.526 "trtype": "TCP", 00:14:42.526 "adrfam": "IPv4", 00:14:42.526 "traddr": "10.0.0.2", 00:14:42.526 "trsvcid": "4420" 00:14:42.526 }, 00:14:42.526 "peer_address": { 00:14:42.526 "trtype": "TCP", 00:14:42.526 "adrfam": "IPv4", 00:14:42.526 "traddr": "10.0.0.1", 00:14:42.526 "trsvcid": "42380" 00:14:42.526 }, 00:14:42.526 "auth": { 00:14:42.526 "state": "completed", 00:14:42.526 "digest": "sha256", 00:14:42.526 "dhgroup": "ffdhe8192" 00:14:42.526 } 00:14:42.526 } 00:14:42.526 ]' 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.526 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.527 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.784 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.784 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.784 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.041 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:14:43.974 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.974 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.974 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.974 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.974 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.974 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.974 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.974 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.232 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.165 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.165 { 00:14:45.165 "cntlid": 43, 00:14:45.165 "qid": 0, 00:14:45.165 "state": "enabled", 00:14:45.165 "thread": "nvmf_tgt_poll_group_000", 00:14:45.165 "listen_address": { 00:14:45.165 "trtype": "TCP", 00:14:45.165 "adrfam": "IPv4", 00:14:45.165 "traddr": "10.0.0.2", 00:14:45.165 "trsvcid": "4420" 00:14:45.165 }, 00:14:45.165 "peer_address": { 00:14:45.165 "trtype": "TCP", 00:14:45.165 "adrfam": "IPv4", 00:14:45.165 "traddr": "10.0.0.1", 00:14:45.165 "trsvcid": "42406" 00:14:45.165 }, 00:14:45.165 "auth": { 00:14:45.165 "state": "completed", 00:14:45.165 "digest": "sha256", 00:14:45.165 "dhgroup": "ffdhe8192" 00:14:45.165 } 00:14:45.165 } 00:14:45.165 ]' 00:14:45.165 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.422 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.422 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.422 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.422 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.422 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.422 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.422 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.679 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:14:46.612 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.612 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.612 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.612 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.612 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.612 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:14:46.612 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.612 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.870 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.803 00:14:47.803 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.803 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.803 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.061 { 00:14:48.061 "cntlid": 45, 00:14:48.061 "qid": 0, 00:14:48.061 "state": "enabled", 00:14:48.061 "thread": "nvmf_tgt_poll_group_000", 00:14:48.061 "listen_address": { 00:14:48.061 "trtype": "TCP", 00:14:48.061 "adrfam": "IPv4", 00:14:48.061 "traddr": "10.0.0.2", 00:14:48.061 "trsvcid": "4420" 
00:14:48.061 }, 00:14:48.061 "peer_address": { 00:14:48.061 "trtype": "TCP", 00:14:48.061 "adrfam": "IPv4", 00:14:48.061 "traddr": "10.0.0.1", 00:14:48.061 "trsvcid": "42424" 00:14:48.061 }, 00:14:48.061 "auth": { 00:14:48.061 "state": "completed", 00:14:48.061 "digest": "sha256", 00:14:48.061 "dhgroup": "ffdhe8192" 00:14:48.061 } 00:14:48.061 } 00:14:48.061 ]' 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.061 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.319 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:14:49.693 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.693 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.693 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.693 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.693 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.693 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.693 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.693 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.693 00:18:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.693 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.694 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.694 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.627 00:14:50.627 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.627 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.627 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.885 { 00:14:50.885 "cntlid": 47, 00:14:50.885 "qid": 0, 00:14:50.885 "state": "enabled", 00:14:50.885 "thread": "nvmf_tgt_poll_group_000", 00:14:50.885 "listen_address": { 00:14:50.885 "trtype": "TCP", 00:14:50.885 "adrfam": "IPv4", 00:14:50.885 "traddr": "10.0.0.2", 00:14:50.885 "trsvcid": "4420" 00:14:50.885 }, 00:14:50.885 "peer_address": { 00:14:50.885 "trtype": "TCP", 00:14:50.885 "adrfam": "IPv4", 00:14:50.885 "traddr": "10.0.0.1", 00:14:50.885 "trsvcid": "55246" 00:14:50.885 }, 00:14:50.885 "auth": { 00:14:50.885 "state": "completed", 00:14:50.885 "digest": "sha256", 00:14:50.885 "dhgroup": "ffdhe8192" 00:14:50.885 } 00:14:50.885 } 00:14:50.885 ]' 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.885 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.885 
00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.451 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.382 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.947 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.947 { 00:14:52.947 "cntlid": 49, 00:14:52.947 "qid": 0, 00:14:52.947 "state": "enabled", 00:14:52.947 "thread": "nvmf_tgt_poll_group_000", 00:14:52.947 "listen_address": { 00:14:52.947 "trtype": "TCP", 00:14:52.947 "adrfam": "IPv4", 00:14:52.947 "traddr": "10.0.0.2", 00:14:52.947 "trsvcid": "4420" 00:14:52.947 }, 00:14:52.947 "peer_address": { 00:14:52.947 "trtype": "TCP", 00:14:52.947 "adrfam": "IPv4", 00:14:52.947 "traddr": "10.0.0.1", 00:14:52.947 "trsvcid": "55270" 00:14:52.947 }, 00:14:52.947 "auth": { 00:14:52.947 "state": "completed", 00:14:52.947 "digest": "sha384", 00:14:52.947 "dhgroup": "null" 00:14:52.947 } 00:14:52.947 } 00:14:52.947 ]' 00:14:52.947 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.205 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.205 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.205 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:53.205 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.205 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.205 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.205 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.463 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:14:54.397 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.397 00:18:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.397 00:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.397 00:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.397 00:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.397 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.397 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.397 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.655 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.913 00:14:54.913 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.913 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.913 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.171 { 00:14:55.171 "cntlid": 51, 00:14:55.171 "qid": 0, 00:14:55.171 "state": "enabled", 00:14:55.171 "thread": "nvmf_tgt_poll_group_000", 00:14:55.171 "listen_address": { 00:14:55.171 "trtype": "TCP", 00:14:55.171 "adrfam": "IPv4", 00:14:55.171 "traddr": "10.0.0.2", 00:14:55.171 "trsvcid": "4420" 00:14:55.171 }, 00:14:55.171 "peer_address": { 00:14:55.171 "trtype": "TCP", 00:14:55.171 "adrfam": "IPv4", 00:14:55.171 "traddr": "10.0.0.1", 00:14:55.171 "trsvcid": "55296" 00:14:55.171 }, 00:14:55.171 "auth": { 00:14:55.171 "state": "completed", 00:14:55.171 "digest": "sha384", 00:14:55.171 "dhgroup": "null" 00:14:55.171 } 00:14:55.171 } 00:14:55.171 ]' 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:55.171 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.429 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.429 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.429 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.687 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:14:56.655 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.655 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.655 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.655 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.655 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.655 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.655 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.655 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:56.912 00:18:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.912 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.913 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.170 00:14:57.170 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.170 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.170 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.428 { 00:14:57.428 "cntlid": 53, 00:14:57.428 "qid": 0, 00:14:57.428 "state": "enabled", 00:14:57.428 "thread": "nvmf_tgt_poll_group_000", 00:14:57.428 "listen_address": { 00:14:57.428 "trtype": "TCP", 00:14:57.428 "adrfam": "IPv4", 00:14:57.428 "traddr": "10.0.0.2", 00:14:57.428 "trsvcid": "4420" 00:14:57.428 }, 00:14:57.428 "peer_address": { 00:14:57.428 "trtype": "TCP", 00:14:57.428 "adrfam": "IPv4", 00:14:57.428 "traddr": "10.0.0.1", 00:14:57.428 "trsvcid": "55324" 00:14:57.428 }, 00:14:57.428 "auth": { 00:14:57.428 "state": "completed", 00:14:57.428 "digest": "sha384", 00:14:57.428 "dhgroup": "null" 00:14:57.428 } 00:14:57.428 } 00:14:57.428 ]' 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:57.428 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.428 00:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.428 00:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.428 00:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.685 00:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:14:58.616 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.616 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.616 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.616 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.616 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.616 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.616 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:58.616 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.873 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.437 00:14:59.437 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.437 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.437 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.437 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.437 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.437 00:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.437 00:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.437 00:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.437 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.437 { 00:14:59.437 "cntlid": 55, 00:14:59.437 "qid": 0, 00:14:59.437 "state": "enabled", 00:14:59.437 "thread": "nvmf_tgt_poll_group_000", 00:14:59.437 "listen_address": { 00:14:59.437 "trtype": "TCP", 00:14:59.437 "adrfam": "IPv4", 00:14:59.437 "traddr": "10.0.0.2", 00:14:59.437 "trsvcid": "4420" 00:14:59.437 }, 00:14:59.437 "peer_address": { 00:14:59.437 "trtype": "TCP", 00:14:59.437 "adrfam": "IPv4", 00:14:59.437 "traddr": "10.0.0.1", 00:14:59.437 "trsvcid": "55344" 00:14:59.437 }, 00:14:59.437 "auth": { 00:14:59.437 "state": "completed", 00:14:59.437 "digest": "sha384", 00:14:59.437 "dhgroup": "null" 00:14:59.437 } 00:14:59.437 } 00:14:59.437 ]' 00:14:59.437 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.694 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.694 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.694 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:59.694 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.694 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.694 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.694 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.951 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:15:00.881 00:18:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.881 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.881 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.881 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.881 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.881 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.881 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.881 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.881 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.138 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.701 00:15:01.701 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.701 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.701 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.957 { 00:15:01.957 "cntlid": 57, 00:15:01.957 "qid": 0, 00:15:01.957 "state": "enabled", 00:15:01.957 "thread": "nvmf_tgt_poll_group_000", 00:15:01.957 "listen_address": { 00:15:01.957 "trtype": "TCP", 00:15:01.957 "adrfam": "IPv4", 00:15:01.957 "traddr": "10.0.0.2", 00:15:01.957 "trsvcid": "4420" 00:15:01.957 }, 00:15:01.957 "peer_address": { 00:15:01.957 "trtype": "TCP", 00:15:01.957 "adrfam": "IPv4", 00:15:01.957 "traddr": "10.0.0.1", 00:15:01.957 "trsvcid": "39250" 00:15:01.957 }, 00:15:01.957 "auth": { 00:15:01.957 "state": "completed", 00:15:01.957 "digest": "sha384", 00:15:01.957 "dhgroup": "ffdhe2048" 00:15:01.957 } 00:15:01.957 } 00:15:01.957 ]' 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.957 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.213 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:15:03.142 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.142 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.142 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.142 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.142 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.142 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.142 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.142 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.400 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.965 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.965 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.965 { 00:15:03.965 "cntlid": 59, 00:15:03.965 "qid": 0, 00:15:03.965 "state": "enabled", 00:15:03.965 "thread": "nvmf_tgt_poll_group_000", 00:15:03.965 "listen_address": { 00:15:03.965 "trtype": "TCP", 00:15:03.965 "adrfam": "IPv4", 00:15:03.965 "traddr": "10.0.0.2", 00:15:03.965 "trsvcid": "4420" 00:15:03.965 }, 00:15:03.965 "peer_address": { 00:15:03.965 "trtype": "TCP", 00:15:03.965 "adrfam": "IPv4", 00:15:03.965 
"traddr": "10.0.0.1", 00:15:03.965 "trsvcid": "39284" 00:15:03.965 }, 00:15:03.965 "auth": { 00:15:03.965 "state": "completed", 00:15:03.965 "digest": "sha384", 00:15:03.965 "dhgroup": "ffdhe2048" 00:15:03.965 } 00:15:03.965 } 00:15:03.965 ]' 00:15:04.222 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.222 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.222 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.222 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:04.222 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.222 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.222 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.222 00:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.480 00:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:15:05.413 00:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.413 00:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.413 00:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.413 00:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.413 00:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.413 00:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.413 00:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.413 00:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.671 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.237 00:15:06.237 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.237 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.237 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.494 { 00:15:06.494 "cntlid": 61, 00:15:06.494 "qid": 0, 00:15:06.494 "state": "enabled", 00:15:06.494 "thread": "nvmf_tgt_poll_group_000", 00:15:06.494 "listen_address": { 00:15:06.494 "trtype": "TCP", 00:15:06.494 "adrfam": "IPv4", 00:15:06.494 "traddr": "10.0.0.2", 00:15:06.494 "trsvcid": "4420" 00:15:06.494 }, 00:15:06.494 "peer_address": { 00:15:06.494 "trtype": "TCP", 00:15:06.494 "adrfam": "IPv4", 00:15:06.494 "traddr": "10.0.0.1", 00:15:06.494 "trsvcid": "39308" 00:15:06.494 }, 00:15:06.494 "auth": { 00:15:06.494 "state": "completed", 00:15:06.494 "digest": "sha384", 00:15:06.494 "dhgroup": "ffdhe2048" 00:15:06.494 } 00:15:06.494 } 00:15:06.494 ]' 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:06.494 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.495 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.495 00:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.495 00:18:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.752 00:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:15:07.686 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.686 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.686 00:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.686 00:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.943 00:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.944 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.944 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.944 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.202 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.460 00:15:08.460 00:18:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.460 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.460 00:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.718 { 00:15:08.718 "cntlid": 63, 00:15:08.718 "qid": 0, 00:15:08.718 "state": "enabled", 00:15:08.718 "thread": "nvmf_tgt_poll_group_000", 00:15:08.718 "listen_address": { 00:15:08.718 "trtype": "TCP", 00:15:08.718 "adrfam": "IPv4", 00:15:08.718 "traddr": "10.0.0.2", 00:15:08.718 "trsvcid": "4420" 00:15:08.718 }, 00:15:08.718 "peer_address": { 00:15:08.718 "trtype": "TCP", 00:15:08.718 "adrfam": "IPv4", 00:15:08.718 "traddr": "10.0.0.1", 00:15:08.718 "trsvcid": "39326" 00:15:08.718 }, 00:15:08.718 "auth": { 00:15:08.718 "state": "completed", 00:15:08.718 "digest": "sha384", 00:15:08.718 "dhgroup": "ffdhe2048" 00:15:08.718 } 00:15:08.718 } 00:15:08.718 ]' 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.718 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.976 00:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
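[Annotation] Each iteration above and below repeats one fixed sequence per (digest, dhgroup, key) combination. The following is a minimal sketch of that sequence, not the literal target/auth.sh source: the loop framing, the variable names ($rpc, $subnqn, $hostnqn, $key0, $ckey0), and the assumption that the target-side rpc_cmd calls use rpc.py's default socket are inferred from this trace; the RPC methods, flags, paths, and NQNs are copied exactly as they appear in the log. $key0/$ckey0 stand in for the DHHC-1:xx:...: secret strings shown above.

    # Paths and NQNs as they appear in this log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Restrict the host to one digest/dhgroup pair for this iteration.
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Register the host on the target with the key under test (the key3
    # iterations in this log omit --dhchap-ctrlr-key).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach from the host side, then confirm the qpair authenticated.
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0

    # Repeat the handshake through nvme-cli with the raw DHHC-1 secrets, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The digest stays fixed at sha384 throughout this section while the dhgroup advances (null, ffdhe2048, ffdhe3072, ffdhe4096) and the key index cycles 0-3, which is why the trace checks .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state against the expected values on every pass. [End annotation]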
00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.351 00:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.610 00:15:10.610 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.610 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.610 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.875 { 
00:15:10.875 "cntlid": 65, 00:15:10.875 "qid": 0, 00:15:10.875 "state": "enabled", 00:15:10.875 "thread": "nvmf_tgt_poll_group_000", 00:15:10.875 "listen_address": { 00:15:10.875 "trtype": "TCP", 00:15:10.875 "adrfam": "IPv4", 00:15:10.875 "traddr": "10.0.0.2", 00:15:10.875 "trsvcid": "4420" 00:15:10.875 }, 00:15:10.875 "peer_address": { 00:15:10.875 "trtype": "TCP", 00:15:10.875 "adrfam": "IPv4", 00:15:10.875 "traddr": "10.0.0.1", 00:15:10.875 "trsvcid": "49288" 00:15:10.875 }, 00:15:10.875 "auth": { 00:15:10.875 "state": "completed", 00:15:10.875 "digest": "sha384", 00:15:10.875 "dhgroup": "ffdhe3072" 00:15:10.875 } 00:15:10.875 } 00:15:10.875 ]' 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.875 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.176 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:11.176 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.176 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.176 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.176 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.434 00:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:15:12.367 00:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.367 00:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.367 00:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.367 00:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.367 00:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.367 00:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.367 00:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.368 00:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.626 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.885 00:15:12.885 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.885 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.885 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.143 { 00:15:13.143 "cntlid": 67, 00:15:13.143 "qid": 0, 00:15:13.143 "state": "enabled", 00:15:13.143 "thread": "nvmf_tgt_poll_group_000", 00:15:13.143 "listen_address": { 00:15:13.143 "trtype": "TCP", 00:15:13.143 "adrfam": "IPv4", 00:15:13.143 "traddr": "10.0.0.2", 00:15:13.143 "trsvcid": "4420" 00:15:13.143 }, 00:15:13.143 "peer_address": { 00:15:13.143 "trtype": "TCP", 00:15:13.143 "adrfam": "IPv4", 00:15:13.143 "traddr": "10.0.0.1", 00:15:13.143 "trsvcid": "49306" 00:15:13.143 }, 00:15:13.143 "auth": { 00:15:13.143 "state": "completed", 00:15:13.143 "digest": "sha384", 00:15:13.143 "dhgroup": "ffdhe3072" 00:15:13.143 } 00:15:13.143 } 00:15:13.143 ]' 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.143 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.401 00:19:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:13.401 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.401 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.401 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.401 00:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.660 00:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:15:14.595 00:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.595 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.595 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.595 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.595 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.595 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.595 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:14.595 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.853 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.111 00:15:15.111 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.111 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.111 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.369 { 00:15:15.369 "cntlid": 69, 00:15:15.369 "qid": 0, 00:15:15.369 "state": "enabled", 00:15:15.369 "thread": "nvmf_tgt_poll_group_000", 00:15:15.369 "listen_address": { 00:15:15.369 "trtype": "TCP", 00:15:15.369 "adrfam": "IPv4", 00:15:15.369 "traddr": "10.0.0.2", 00:15:15.369 "trsvcid": "4420" 00:15:15.369 }, 00:15:15.369 "peer_address": { 00:15:15.369 "trtype": "TCP", 00:15:15.369 "adrfam": "IPv4", 00:15:15.369 "traddr": "10.0.0.1", 00:15:15.369 "trsvcid": "49326" 00:15:15.369 }, 00:15:15.369 "auth": { 00:15:15.369 "state": "completed", 00:15:15.369 "digest": "sha384", 00:15:15.369 "dhgroup": "ffdhe3072" 00:15:15.369 } 00:15:15.369 } 00:15:15.369 ]' 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.369 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.628 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.628 00:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.628 00:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.628 00:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.628 00:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.886 00:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret 
DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:15:16.820 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.820 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.820 00:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.820 00:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.820 00:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.820 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.821 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.821 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.079 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.336 00:15:17.336 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.336 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.336 00:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.594 { 00:15:17.594 "cntlid": 71, 00:15:17.594 "qid": 0, 00:15:17.594 "state": "enabled", 00:15:17.594 "thread": "nvmf_tgt_poll_group_000", 00:15:17.594 "listen_address": { 00:15:17.594 "trtype": "TCP", 00:15:17.594 "adrfam": "IPv4", 00:15:17.594 "traddr": "10.0.0.2", 00:15:17.594 "trsvcid": "4420" 00:15:17.594 }, 00:15:17.594 "peer_address": { 00:15:17.594 "trtype": "TCP", 00:15:17.594 "adrfam": "IPv4", 00:15:17.594 "traddr": "10.0.0.1", 00:15:17.594 "trsvcid": "49352" 00:15:17.594 }, 00:15:17.594 "auth": { 00:15:17.594 "state": "completed", 00:15:17.594 "digest": "sha384", 00:15:17.594 "dhgroup": "ffdhe3072" 00:15:17.594 } 00:15:17.594 } 00:15:17.594 ]' 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.594 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.851 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.851 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.851 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.109 00:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.041 00:19:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.298 00:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.556 00:15:19.814 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.814 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.814 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.814 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.814 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.814 00:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.814 00:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.072 { 00:15:20.072 "cntlid": 73, 00:15:20.072 "qid": 0, 00:15:20.072 "state": "enabled", 00:15:20.072 "thread": "nvmf_tgt_poll_group_000", 00:15:20.072 "listen_address": { 00:15:20.072 "trtype": "TCP", 00:15:20.072 "adrfam": "IPv4", 00:15:20.072 "traddr": "10.0.0.2", 00:15:20.072 "trsvcid": "4420" 00:15:20.072 }, 00:15:20.072 "peer_address": { 00:15:20.072 "trtype": "TCP", 00:15:20.072 "adrfam": "IPv4", 00:15:20.072 "traddr": "10.0.0.1", 00:15:20.072 "trsvcid": "49374" 00:15:20.072 }, 00:15:20.072 "auth": { 00:15:20.072 
"state": "completed", 00:15:20.072 "digest": "sha384", 00:15:20.072 "dhgroup": "ffdhe4096" 00:15:20.072 } 00:15:20.072 } 00:15:20.072 ]' 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.072 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.331 00:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:21.267 00:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:21.525 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:22.106
00:15:22.106 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:22.106 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:22.106 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:22.106 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:22.106 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:22.106 00:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:22.106 00:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.107 00:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:22.107 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:22.107 {
00:15:22.107 "cntlid": 75,
00:15:22.107 "qid": 0,
00:15:22.107 "state": "enabled",
00:15:22.107 "thread": "nvmf_tgt_poll_group_000",
00:15:22.107 "listen_address": {
00:15:22.107 "trtype": "TCP",
00:15:22.107 "adrfam": "IPv4",
00:15:22.107 "traddr": "10.0.0.2",
00:15:22.107 "trsvcid": "4420"
00:15:22.107 },
00:15:22.107 "peer_address": {
00:15:22.107 "trtype": "TCP",
00:15:22.107 "adrfam": "IPv4",
00:15:22.107 "traddr": "10.0.0.1",
00:15:22.107 "trsvcid": "44036"
00:15:22.107 },
00:15:22.107 "auth": {
00:15:22.107 "state": "completed",
00:15:22.107 "digest": "sha384",
00:15:22.107 "dhgroup": "ffdhe4096"
00:15:22.107 }
00:15:22.107 }
00:15:22.107 ]'
00:15:22.364 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:22.364 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:22.364 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:22.364 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:22.364 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:22.364 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:22.364 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:22.364 00:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:22.621 00:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==:
00:15:23.557 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:23.557 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:23.557 00:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.557 00:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.557 00:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.557 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:23.557 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:23.557 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:23.814 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2
00:15:23.814 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:23.814 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:23.814 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:23.814 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:23.814 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:23.814 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:23.814 00:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.815 00:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.815 00:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.815 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:23.815 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.381
00:15:24.381 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:24.381 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:24.381 00:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:24.638 {
00:15:24.638 "cntlid": 77,
00:15:24.638 "qid": 0,
00:15:24.638 "state": "enabled",
00:15:24.638 "thread": "nvmf_tgt_poll_group_000",
00:15:24.638 "listen_address": {
00:15:24.638 "trtype": "TCP",
00:15:24.638 "adrfam": "IPv4",
00:15:24.638 "traddr": "10.0.0.2",
00:15:24.638 "trsvcid": "4420"
00:15:24.638 },
00:15:24.638 "peer_address": {
00:15:24.638 "trtype": "TCP",
00:15:24.638 "adrfam": "IPv4",
00:15:24.638 "traddr": "10.0.0.1",
00:15:24.638 "trsvcid": "44058"
00:15:24.638 },
00:15:24.638 "auth": {
00:15:24.638 "state": "completed",
00:15:24.638 "digest": "sha384",
00:15:24.638 "dhgroup": "ffdhe4096"
00:15:24.638 }
00:15:24.638 }
00:15:24.638 ]'
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:24.638 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:24.895 00:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq:
00:15:25.829 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:25.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:25.829 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:25.829 00:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:25.829 00:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.829 00:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:25.829 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:25.829 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:25.829 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
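Note that the nvmf_subsystem_add_host call above passes only --dhchap-key key3, with no --dhchap-ctrlr-key: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion contributes the controller-key argument only when a controller key exists for that index, and this run configures none for key 3, so the key3 cycles exercise unidirectional (host-only) authentication while keys 0-2 are bidirectional. A minimal sketch of the idiom, with hypothetical array contents:

  # ${var:+word} expands to nothing when var is empty or unset, so the
  # same rpc_cmd line covers unidirectional and bidirectional cycles.
  declare -A ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)  # no ctrlr key for 3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"  # expands to zero words here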
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:26.135 00:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:26.704
00:15:26.704 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:26.704 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:26.704 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:26.963 {
00:15:26.963 "cntlid": 79,
00:15:26.963 "qid": 0,
00:15:26.963 "state": "enabled",
00:15:26.963 "thread": "nvmf_tgt_poll_group_000",
00:15:26.963 "listen_address": {
00:15:26.963 "trtype": "TCP",
00:15:26.963 "adrfam": "IPv4",
00:15:26.963 "traddr": "10.0.0.2",
00:15:26.963 "trsvcid": "4420"
00:15:26.963 },
00:15:26.963 "peer_address": {
00:15:26.963 "trtype": "TCP",
00:15:26.963 "adrfam": "IPv4",
00:15:26.963 "traddr": "10.0.0.1",
00:15:26.963 "trsvcid": "44076"
00:15:26.963 },
00:15:26.963 "auth": {
00:15:26.963 "state": "completed",
00:15:26.963 "digest": "sha384",
00:15:26.963 "dhgroup": "ffdhe4096"
00:15:26.963 }
00:15:26.963 }
00:15:26.963 ]'
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:26.963 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:27.221 00:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=:
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:28.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:28.156 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:28.413 00:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:28.982
00:15:28.982 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:28.982 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:28.982 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:29.239 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:29.239 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:29.239 00:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:29.239 00:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.239 00:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:29.239 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:29.239 {
00:15:29.239 "cntlid": 81,
00:15:29.239 "qid": 0,
00:15:29.239 "state": "enabled",
00:15:29.239 "thread": "nvmf_tgt_poll_group_000",
00:15:29.239 "listen_address": {
00:15:29.239 "trtype": "TCP",
00:15:29.239 "adrfam": "IPv4",
00:15:29.239 "traddr": "10.0.0.2",
00:15:29.239 "trsvcid": "4420"
00:15:29.239 },
00:15:29.239 "peer_address": {
00:15:29.239 "trtype": "TCP",
00:15:29.239 "adrfam": "IPv4",
00:15:29.239 "traddr": "10.0.0.1",
00:15:29.239 "trsvcid": "44100"
00:15:29.239 },
00:15:29.239 "auth": {
00:15:29.239 "state": "completed",
00:15:29.239 "digest": "sha384",
00:15:29.239 "dhgroup": "ffdhe6144"
00:15:29.239 }
00:15:29.239 }
00:15:29.239 ]'
00:15:29.239 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:29.239 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:29.497 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:29.497 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:29.497 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:29.497 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:29.497 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:29.497 00:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:29.755 00:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=:
00:15:30.691 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:30.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:30.691 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:30.691 00:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:30.691 00:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.691 00:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:30.691 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:30.691 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:30.691 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:30.949 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1
00:15:30.949 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:30.949 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:30.950 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:31.517
00:15:31.517 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:31.517 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:31.517 00:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:31.774 {
00:15:31.774 "cntlid": 83,
00:15:31.774 "qid": 0,
00:15:31.774 "state": "enabled",
00:15:31.774 "thread": "nvmf_tgt_poll_group_000",
00:15:31.774 "listen_address": {
00:15:31.774 "trtype": "TCP",
00:15:31.774 "adrfam": "IPv4",
00:15:31.774 "traddr": "10.0.0.2",
00:15:31.774 "trsvcid": "4420"
00:15:31.774 },
00:15:31.774 "peer_address": {
00:15:31.774 "trtype": "TCP",
00:15:31.774 "adrfam": "IPv4",
00:15:31.774 "traddr": "10.0.0.1",
00:15:31.774 "trsvcid": "51058"
00:15:31.774 },
00:15:31.774 "auth": {
00:15:31.774 "state": "completed",
00:15:31.774 "digest": "sha384",
00:15:31.774 "dhgroup": "ffdhe6144"
00:15:31.774 }
00:15:31.774 }
00:15:31.774 ]'
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:31.774 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:32.031 00:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==:
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:33.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:33.406 00:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:33.973
00:15:33.973 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:33.973 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:33.973 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:34.229 {
00:15:34.229 "cntlid": 85,
00:15:34.229 "qid": 0,
00:15:34.229 "state": "enabled",
00:15:34.229 "thread": "nvmf_tgt_poll_group_000",
00:15:34.229 "listen_address": {
00:15:34.229 "trtype": "TCP",
00:15:34.229 "adrfam": "IPv4",
00:15:34.229 "traddr": "10.0.0.2",
00:15:34.229 "trsvcid": "4420"
00:15:34.229 },
00:15:34.229 "peer_address": {
00:15:34.229 "trtype": "TCP",
00:15:34.229 "adrfam": "IPv4",
00:15:34.229 "traddr": "10.0.0.1",
00:15:34.229 "trsvcid": "51074"
00:15:34.229 },
00:15:34.229 "auth": {
00:15:34.229 "state": "completed",
00:15:34.229 "digest": "sha384",
00:15:34.229 "dhgroup": "ffdhe6144"
00:15:34.229 }
00:15:34.229 }
00:15:34.229 ]'
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:34.229 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:34.485 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:34.485 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:34.486 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:34.486 00:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:34.744 00:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq:
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:35.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
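Each cycle re-pins the SPDK host application to exactly one digest and one DH group with bdev_nvme_set_options before attaching, so a successful attach proves that that specific pair was actually negotiated; a combination outside the allowed lists simply cannot be agreed on. A condensed sketch of the host-side pinning, using the same RPCs as this run (/var/tmp/host.sock is the host application's RPC socket here; $hostnqn stands in for the full uuid NQN):

  # Pin the host to one DH-HMAC-CHAP digest and one DH group, then attach.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3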
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:35.712 00:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:36.277
00:15:36.277 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:36.277 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:36.277 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:36.534 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:36.534 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:36.534 00:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:36.534 00:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.534 00:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:36.534 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:36.534 {
00:15:36.534 "cntlid": 87,
00:15:36.534 "qid": 0,
00:15:36.534 "state": "enabled",
00:15:36.534 "thread": "nvmf_tgt_poll_group_000",
00:15:36.534 "listen_address": {
00:15:36.534 "trtype": "TCP",
00:15:36.534 "adrfam": "IPv4",
00:15:36.534 "traddr": "10.0.0.2",
00:15:36.534 "trsvcid": "4420"
00:15:36.534 },
00:15:36.534 "peer_address": {
00:15:36.534 "trtype": "TCP",
00:15:36.534 "adrfam": "IPv4",
00:15:36.534 "traddr": "10.0.0.1",
00:15:36.534 "trsvcid": "51090"
00:15:36.534 },
00:15:36.534 "auth": {
00:15:36.534 "state": "completed",
00:15:36.534 "digest": "sha384",
00:15:36.534 "dhgroup": "ffdhe6144"
00:15:36.534 }
00:15:36.534 }
00:15:36.534 ]'
00:15:36.790 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:36.790 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:36.790 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:36.791 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:36.791 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:36.791 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:36.791 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:36.791 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:37.048 00:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=:
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:37.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
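The for dhgroup line just below is the middle level of the script's test matrix advancing from ffdhe6144 to ffdhe8192; the digest loop one level up moves from sha384 to sha512 the same way further on. A condensed paraphrase of the iteration driving all of these cycles (the verbatim target/auth.sh wraps each step in the setup and teardown visible in this log):

  # Outermost to innermost: digest, DH group, key index; one
  # set_options + connect_authenticate cycle per combination.
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done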
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:37.982 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:38.239 00:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:39.177
00:15:39.177 00:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:39.177 00:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:39.177 00:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:39.434 00:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:39.434 00:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:39.434 00:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:39.434 00:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:39.434 00:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:39.434 00:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:39.434 {
00:15:39.434 "cntlid": 89,
00:15:39.434 "qid": 0,
00:15:39.434 "state": "enabled",
00:15:39.434 "thread": "nvmf_tgt_poll_group_000",
00:15:39.434 "listen_address": {
00:15:39.434 "trtype": "TCP",
00:15:39.434 "adrfam": "IPv4",
00:15:39.434 "traddr": "10.0.0.2",
00:15:39.434 "trsvcid": "4420"
00:15:39.434 },
00:15:39.434 "peer_address": {
00:15:39.434 "trtype": "TCP",
00:15:39.434 "adrfam": "IPv4",
00:15:39.434 "traddr": "10.0.0.1",
00:15:39.434 "trsvcid": "51116"
00:15:39.434 },
00:15:39.434 "auth": {
00:15:39.434 "state": "completed",
00:15:39.434 "digest": "sha384",
00:15:39.434 "dhgroup": "ffdhe8192"
00:15:39.434 }
00:15:39.434 }
00:15:39.434 ]'
00:15:39.434 00:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:39.434 00:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:39.434 00:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:39.434 00:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:39.691 00:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:39.691 00:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:39.691 00:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:39.691 00:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:39.948 00:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=:
00:15:40.879 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:40.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:40.879 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:40.879 00:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:40.879 00:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:40.879 00:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:40.879 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:40.879 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:40.879 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:41.136 00:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
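The entries that follow are the verification half of connect_authenticate: the target's view of the freshly authenticated queue pair is fetched and three jq probes assert that the negotiated digest, DH group, and auth state match what this cycle requested. An equivalent standalone sketch, assuming the nvmf_subsystem_get_qpairs output shape printed below:

  # Verify the negotiated auth parameters from the target's side.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]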
00:15:42.095 00:15:42.095 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.095 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.095 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.352 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.352 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.352 00:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.352 00:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.352 00:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.352 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.352 { 00:15:42.352 "cntlid": 91, 00:15:42.352 "qid": 0, 00:15:42.352 "state": "enabled", 00:15:42.352 "thread": "nvmf_tgt_poll_group_000", 00:15:42.353 "listen_address": { 00:15:42.353 "trtype": "TCP", 00:15:42.353 "adrfam": "IPv4", 00:15:42.353 "traddr": "10.0.0.2", 00:15:42.353 "trsvcid": "4420" 00:15:42.353 }, 00:15:42.353 "peer_address": { 00:15:42.353 "trtype": "TCP", 00:15:42.353 "adrfam": "IPv4", 00:15:42.353 "traddr": "10.0.0.1", 00:15:42.353 "trsvcid": "42438" 00:15:42.353 }, 00:15:42.353 "auth": { 00:15:42.353 "state": "completed", 00:15:42.353 "digest": "sha384", 00:15:42.353 "dhgroup": "ffdhe8192" 00:15:42.353 } 00:15:42.353 } 00:15:42.353 ]' 00:15:42.353 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.353 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.353 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.353 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.353 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.353 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.353 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.353 00:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.612 00:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.989 00:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.927 00:15:44.927 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.927 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.927 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.186 { 
00:15:45.186 "cntlid": 93, 00:15:45.186 "qid": 0, 00:15:45.186 "state": "enabled", 00:15:45.186 "thread": "nvmf_tgt_poll_group_000", 00:15:45.186 "listen_address": { 00:15:45.186 "trtype": "TCP", 00:15:45.186 "adrfam": "IPv4", 00:15:45.186 "traddr": "10.0.0.2", 00:15:45.186 "trsvcid": "4420" 00:15:45.186 }, 00:15:45.186 "peer_address": { 00:15:45.186 "trtype": "TCP", 00:15:45.186 "adrfam": "IPv4", 00:15:45.186 "traddr": "10.0.0.1", 00:15:45.186 "trsvcid": "42466" 00:15:45.186 }, 00:15:45.186 "auth": { 00:15:45.186 "state": "completed", 00:15:45.186 "digest": "sha384", 00:15:45.186 "dhgroup": "ffdhe8192" 00:15:45.186 } 00:15:45.186 } 00:15:45.186 ]' 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.186 00:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.444 00:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:15:46.378 00:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.378 00:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.378 00:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.378 00:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.635 00:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.635 00:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.635 00:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.635 00:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.893 00:19:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.893 00:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.852 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.852 { 00:15:47.852 "cntlid": 95, 00:15:47.852 "qid": 0, 00:15:47.852 "state": "enabled", 00:15:47.852 "thread": "nvmf_tgt_poll_group_000", 00:15:47.852 "listen_address": { 00:15:47.852 "trtype": "TCP", 00:15:47.852 "adrfam": "IPv4", 00:15:47.852 "traddr": "10.0.0.2", 00:15:47.852 "trsvcid": "4420" 00:15:47.852 }, 00:15:47.852 "peer_address": { 00:15:47.852 "trtype": "TCP", 00:15:47.852 "adrfam": "IPv4", 00:15:47.852 "traddr": "10.0.0.1", 00:15:47.852 "trsvcid": "42480" 00:15:47.852 }, 00:15:47.852 "auth": { 00:15:47.852 "state": "completed", 00:15:47.852 "digest": "sha384", 00:15:47.852 "dhgroup": "ffdhe8192" 00:15:47.852 } 00:15:47.852 } 00:15:47.852 ]' 00:15:47.852 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.110 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.110 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.110 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.110 00:19:39 
00:15:48.110 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:48.110 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:48.110 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:48.110 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:48.368 00:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=:
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:49.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:49.301 00:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:49.558 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:49.814
00:15:49.814 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:49.814 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:49.814 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:50.071 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:50.071 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:50.071 00:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:50.071 00:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:50.327 {
00:15:50.327 "cntlid": 97,
00:15:50.327 "qid": 0,
00:15:50.327 "state": "enabled",
00:15:50.327 "thread": "nvmf_tgt_poll_group_000",
00:15:50.327 "listen_address": {
00:15:50.327 "trtype": "TCP",
00:15:50.327 "adrfam": "IPv4",
00:15:50.327 "traddr": "10.0.0.2",
00:15:50.327 "trsvcid": "4420"
00:15:50.327 },
00:15:50.327 "peer_address": {
00:15:50.327 "trtype": "TCP",
00:15:50.327 "adrfam": "IPv4",
00:15:50.327 "traddr": "10.0.0.1",
00:15:50.327 "trsvcid": "33912"
00:15:50.327 },
00:15:50.327 "auth": {
00:15:50.327 "state": "completed",
00:15:50.327 "digest": "sha512",
00:15:50.327 "dhgroup": "null"
00:15:50.327 }
00:15:50.327 }
00:15:50.327 ]'
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:50.327 00:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
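Each iteration also exercises the in-kernel initiator via nvme-cli. The secrets are passed in the DHHC-1 representation, DHHC-1:<tt>:<base64 body>:, where the two-digit field records how the secret was transformed (00 meaning the secret is used as-is; the nonzero values select a SHA-2 transform, which is why the key strings in this run range from DHHC-1:00: through DHHC-1:03:). A hedged sketch of that leg, with the secret bodies deliberately elided:

    # Kernel-initiator leg; flags exactly as used in this run, secrets elided.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0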
00:15:50.584 00:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=:
00:15:51.514 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:51.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:51.514 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:51.514 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:51.514 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:51.514 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:51.514 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:51.514 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:51.514 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:51.793 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:52.050
00:15:52.308 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:52.308 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:52.308 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
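This block switches the host to --dhchap-dhgroups null: the handshake still authenticates both ends via the HMAC challenge/response, but skips the ephemeral Diffie-Hellman exchange that the ffdhe* groups add. Restricting the host to exactly one digest/group pair is what makes the later qpair assertions meaningful, since only that pair can be negotiated:

    # Hedged sketch: force sha512 with the "null" DH group on the SPDK host app.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null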
00:15:52.308 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:52.308 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:52.308 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:52.308 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:52.565 00:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:52.565 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:52.565 {
00:15:52.565 "cntlid": 99,
00:15:52.565 "qid": 0,
00:15:52.565 "state": "enabled",
00:15:52.565 "thread": "nvmf_tgt_poll_group_000",
00:15:52.565 "listen_address": {
00:15:52.565 "trtype": "TCP",
00:15:52.565 "adrfam": "IPv4",
00:15:52.565 "traddr": "10.0.0.2",
00:15:52.565 "trsvcid": "4420"
00:15:52.565 },
00:15:52.565 "peer_address": {
00:15:52.565 "trtype": "TCP",
00:15:52.565 "adrfam": "IPv4",
00:15:52.565 "traddr": "10.0.0.1",
00:15:52.565 "trsvcid": "33932"
00:15:52.565 },
00:15:52.565 "auth": {
00:15:52.565 "state": "completed",
00:15:52.565 "digest": "sha512",
00:15:52.565 "dhgroup": "null"
00:15:52.565 }
00:15:52.565 }
00:15:52.565 ]'
00:15:52.565 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:52.565 00:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:52.565 00:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:52.565 00:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:15:52.565 00:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:52.565 00:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:52.565 00:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:52.565 00:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:52.823 00:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==:
00:15:53.774 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:53.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:53.774 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:53.774 00:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:53.774 00:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:53.774 00:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:53.774 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:53.774 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:53.774 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:54.031 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:54.288
00:15:54.288 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:54.288 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:54.288 00:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:54.546 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:54.546 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:54.546 00:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:54.546 00:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:54.546 00:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:54.546 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:54.546 {
00:15:54.547 "cntlid": 101,
00:15:54.547 "qid": 0,
00:15:54.547 "state": "enabled",
00:15:54.547 "thread": "nvmf_tgt_poll_group_000",
00:15:54.547 "listen_address": {
00:15:54.547 "trtype": "TCP",
00:15:54.547 "adrfam": "IPv4",
00:15:54.547 "traddr": "10.0.0.2",
00:15:54.547 "trsvcid": "4420"
00:15:54.547 },
00:15:54.547 "peer_address": {
00:15:54.547 "trtype": "TCP",
00:15:54.547 "adrfam": "IPv4",
00:15:54.547 "traddr": "10.0.0.1",
00:15:54.547 "trsvcid": "33956"
00:15:54.547 },
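Key indices 0 through 2 carry a controller key (ckeyN) as well, so nvmf_subsystem_add_host registers both directions: --dhchap-key is what the host must prove knowledge of, while --dhchap-ctrlr-key makes the controller authenticate itself back to the host (bidirectional DH-HMAC-CHAP). A minimal sketch of the target-side registration for the key2 pass above:

    # Hedged sketch; key names as registered earlier in the script.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2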
00:15:54.547 "auth": {
00:15:54.547 "state": "completed",
00:15:54.547 "digest": "sha512",
00:15:54.547 "dhgroup": "null"
00:15:54.547 }
00:15:54.547 }
00:15:54.547 ]'
00:15:54.804 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:54.804 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:54.804 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:54.804 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:15:54.804 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:54.804 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:54.804 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:54.804 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:55.061 00:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq:
00:15:55.994 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:55.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:55.994 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:55.994 00:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:55.994 00:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:55.994 00:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:55.994 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:55.994 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:55.994 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:56.251 00:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:56.508
00:15:56.770 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:56.770 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:56.770 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:56.770 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:56.770 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:56.770 00:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:56.770 00:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.770 00:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:57.070 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:57.070 {
00:15:57.070 "cntlid": 103,
00:15:57.070 "qid": 0,
00:15:57.070 "state": "enabled",
00:15:57.070 "thread": "nvmf_tgt_poll_group_000",
00:15:57.070 "listen_address": {
00:15:57.070 "trtype": "TCP",
00:15:57.070 "adrfam": "IPv4",
00:15:57.070 "traddr": "10.0.0.2",
00:15:57.070 "trsvcid": "4420"
00:15:57.070 },
00:15:57.070 "peer_address": {
00:15:57.070 "trtype": "TCP",
00:15:57.070 "adrfam": "IPv4",
00:15:57.070 "traddr": "10.0.0.1",
00:15:57.070 "trsvcid": "33968"
00:15:57.070 },
00:15:57.070 "auth": {
00:15:57.070 "state": "completed",
00:15:57.070 "digest": "sha512",
00:15:57.070 "dhgroup": "null"
00:15:57.070 }
00:15:57.070 }
00:15:57.070 ]'
00:15:57.071 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:57.071 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:57.071 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:57.071 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:15:57.071 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:57.071 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:57.071 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:57.071 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
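Note the @37 line above: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) builds the controller-key flag as an optional array. Inside connect_authenticate, $3 is the key index argument, and the :+ expansion yields the flag pair only when ckeys[$3] is set and non-empty, which is why the key3 passes register and attach with --dhchap-key key3 alone. The same idiom in isolation, as a hedged sketch (array contents assumed from earlier in the script):

    keyid=3
    # Expands to zero words when ckeys[3] is unset/empty, two words otherwise.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"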
00:15:57.327 00:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=:
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:58.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:15:58.256 00:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:58.535 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:58.793
00:15:58.793 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:58.793 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:58.793 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:59.049 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:59.050 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:59.050 00:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:59.050 00:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
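Teardown is the mirror image of setup and repeats after every combination: detach the SPDK host's controller, run and drop the kernel-initiator connection, then revoke the host entry so the next iteration can re-add it with the next key. A hedged sketch of that sequence, with names from this run:

    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"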
00:15:59.050 00:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:59.050 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:59.050 {
00:15:59.050 "cntlid": 105,
00:15:59.050 "qid": 0,
00:15:59.050 "state": "enabled",
00:15:59.050 "thread": "nvmf_tgt_poll_group_000",
00:15:59.050 "listen_address": {
00:15:59.050 "trtype": "TCP",
00:15:59.050 "adrfam": "IPv4",
00:15:59.050 "traddr": "10.0.0.2",
00:15:59.050 "trsvcid": "4420"
00:15:59.050 },
00:15:59.050 "peer_address": {
00:15:59.050 "trtype": "TCP",
00:15:59.050 "adrfam": "IPv4",
00:15:59.050 "traddr": "10.0.0.1",
00:15:59.050 "trsvcid": "33996"
00:15:59.050 },
00:15:59.050 "auth": {
00:15:59.050 "state": "completed",
00:15:59.050 "digest": "sha512",
00:15:59.050 "dhgroup": "ffdhe2048"
00:15:59.050 }
00:15:59.050 }
00:15:59.050 ]'
00:15:59.050 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:59.050 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:59.050 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:59.307 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:59.307 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:59.307 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:59.307 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:59.307 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:59.564 00:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=:
00:16:00.496 00:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:00.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:00.496 00:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:00.496 00:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:00.496 00:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.496 00:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:00.496 00:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:00.496 00:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:00.496 00:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:00.753 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:01.011
00:16:01.011 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:01.011 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:01.011 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:01.269 {
00:16:01.269 "cntlid": 107,
00:16:01.269 "qid": 0,
00:16:01.269 "state": "enabled",
00:16:01.269 "thread": "nvmf_tgt_poll_group_000",
00:16:01.269 "listen_address": {
00:16:01.269 "trtype": "TCP",
00:16:01.269 "adrfam": "IPv4",
00:16:01.269 "traddr": "10.0.0.2",
00:16:01.269 "trsvcid": "4420"
00:16:01.269 },
00:16:01.269 "peer_address": {
00:16:01.269 "trtype": "TCP",
00:16:01.269 "adrfam": "IPv4",
00:16:01.269 "traddr": "10.0.0.1",
00:16:01.269 "trsvcid": "32838"
00:16:01.269 },
00:16:01.269 "auth": {
00:16:01.269 "state": "completed",
00:16:01.269 "digest": "sha512",
00:16:01.269 "dhgroup": "ffdhe2048"
00:16:01.269 }
00:16:01.269 }
00:16:01.269 ]'
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:01.269 00:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:01.527 00:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==:
00:16:02.459 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:02.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:02.717 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:02.717 00:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:02.717 00:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.717 00:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:02.717 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:02.717 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:02.717 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:02.975 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:03.234
00:16:03.234 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:03.234 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:03.234 00:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:03.492 {
00:16:03.492 "cntlid": 109,
00:16:03.492 "qid": 0,
00:16:03.492 "state": "enabled",
00:16:03.492 "thread": "nvmf_tgt_poll_group_000",
00:16:03.492 "listen_address": {
00:16:03.492 "trtype": "TCP",
00:16:03.492 "adrfam": "IPv4",
00:16:03.492 "traddr": "10.0.0.2",
00:16:03.492 "trsvcid": "4420"
00:16:03.492 },
00:16:03.492 "peer_address": {
00:16:03.492 "trtype": "TCP",
00:16:03.492 "adrfam": "IPv4",
00:16:03.492 "traddr": "10.0.0.1",
00:16:03.492 "trsvcid": "32864"
00:16:03.492 },
00:16:03.492 "auth": {
00:16:03.492 "state": "completed",
00:16:03.492 "digest": "sha512",
00:16:03.492 "dhgroup": "ffdhe2048"
00:16:3.492 }
00:16:03.492 }
00:16:03.492 ]'
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:03.492 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:03.749 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:03.749 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:03.749 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:03.749 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:03.749 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:04.007 00:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq:
00:16:04.941 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:04.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:04.941 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:04.941 00:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:04.941 00:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.941 00:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:04.941 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:04.941 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:04.941 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:05.210 00:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:05.468
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.770 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:05.770 {
00:16:05.725 "cntlid": 111,
00:16:05.725 "qid": 0,
00:16:05.725 "state": "enabled",
00:16:05.725 "thread": "nvmf_tgt_poll_group_000",
00:16:05.725 "listen_address": {
00:16:05.725 "trtype": "TCP",
00:16:05.725 "adrfam": "IPv4",
00:16:05.725 "traddr": "10.0.0.2",
00:16:05.725 "trsvcid": "4420"
00:16:05.725 },
00:16:05.725 "peer_address": {
00:16:05.725 "trtype": "TCP",
00:16:05.725 "adrfam": "IPv4",
00:16:05.725 "traddr": "10.0.0.1",
00:16:05.725 "trsvcid": "32896"
00:16:05.725 },
00:16:05.725 "auth": {
00:16:05.725 "state": "completed",
00:16:05.725 "digest": "sha512",
00:16:05.725 "dhgroup": "ffdhe2048"
00:16:05.725 }
00:16:05.725 }
00:16:05.725 ]'
00:16:05.983 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:05.983 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:05.983 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:05.983 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:05.983 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:05.983 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:05.983 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:05.983 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.240 00:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=:
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:07.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
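Before inspecting qpairs, each pass first confirms the attach actually produced the controller it asked for by listing controllers on the host socket. A minimal sketch of that check (controller name nvme0 as used throughout this run):

    name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]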
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:07.169 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:07.426 00:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:07.989
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.989 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:07.989 {
00:16:07.989 "cntlid": 113,
00:16:07.989 "qid": 0,
00:16:07.989 "state": "enabled",
00:16:07.989 "thread": "nvmf_tgt_poll_group_000",
00:16:07.989 "listen_address": {
00:16:07.989 "trtype": "TCP",
00:16:07.989 "adrfam": "IPv4",
00:16:07.989 "traddr": "10.0.0.2",
00:16:07.989 "trsvcid": "4420"
00:16:07.989 },
00:16:07.989 "peer_address": {
00:16:07.989 "trtype": "TCP",
00:16:07.989 "adrfam": "IPv4",
00:16:07.989 "traddr": "10.0.0.1",
00:16:07.989 "trsvcid": "32914"
00:16:07.989 },
00:16:07.989 "auth": {
00:16:07.989 "state": "completed",
00:16:07.989 "digest": "sha512",
00:16:07.989 "dhgroup": "ffdhe3072"
00:16:07.989 }
00:16:07.989 }
00:16:07.989 ]'
00:16:08.246 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:08.246 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:08.246 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:08.246 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:08.246 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:08.246 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:08.246 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:08.246 00:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:08.503 00:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=:
00:16:09.435 00:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:09.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:09.435 00:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:09.435 00:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.435 00:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.435 00:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:09.435 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:09.435 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
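The qpair dumps are worth reading once: listen_address is the target's listener (10.0.0.2:4420 throughout), peer_address is the host side of the TCP connection (the ephemeral port changes on every reconnect, e.g. 42466, 42480, 33912, ... 32914), and the auth object records the negotiated digest and dhgroup plus a state that must read completed. A hedged one-liner to summarize them, using only the field names shown above:

    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[] | "\(.cntlid) \(.peer_address.traddr):\(.peer_address.trsvcid) \(.auth.state)"'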
00:16:09.435 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:09.692 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:16:09.692 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:09.692 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:09.692 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:09.692 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:09.692 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:09.949 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:09.949 00:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.949 00:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.949 00:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:09.949 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:09.949 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:10.207
00:16:10.207 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:10.207 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.207 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:10.465 {
00:16:10.465 "cntlid": 115,
00:16:10.465 "qid": 0,
00:16:10.465 "state": "enabled",
00:16:10.465 "thread": "nvmf_tgt_poll_group_000",
00:16:10.465 "listen_address": {
00:16:10.465 "trtype": "TCP",
00:16:10.465 "adrfam": "IPv4",
00:16:10.465 "traddr": "10.0.0.2",
00:16:10.465 "trsvcid": "4420"
00:16:10.465 },
00:16:10.465 "peer_address": {
00:16:10.465 "trtype": "TCP",
00:16:10.465 "adrfam": "IPv4",
00:16:10.465 "traddr": "10.0.0.1",
00:16:10.465 "trsvcid": "56980"
00:16:10.465 },
00:16:10.465 "auth": {
00:16:10.465 "state": "completed",
00:16:10.465 "digest": "sha512",
00:16:10.465 "dhgroup": "ffdhe3072"
00:16:10.465 }
00:16:10.465 }
00:16:10.465 ]'
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:10.465 00:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:10.465 00:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:10.465 00:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:10.465 00:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.465 00:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.465 00:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.724 00:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==:
00:16:12.131 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:12.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:12.131 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.132 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.389 00:16:12.389 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.389 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.389 00:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.647 { 00:16:12.647 "cntlid": 117, 00:16:12.647 "qid": 0, 00:16:12.647 "state": "enabled", 00:16:12.647 "thread": "nvmf_tgt_poll_group_000", 00:16:12.647 "listen_address": { 00:16:12.647 "trtype": "TCP", 00:16:12.647 "adrfam": "IPv4", 00:16:12.647 "traddr": "10.0.0.2", 00:16:12.647 "trsvcid": "4420" 00:16:12.647 }, 00:16:12.647 "peer_address": { 00:16:12.647 "trtype": "TCP", 00:16:12.647 "adrfam": "IPv4", 00:16:12.647 "traddr": "10.0.0.1", 00:16:12.647 "trsvcid": "57010" 00:16:12.647 }, 00:16:12.647 "auth": { 00:16:12.647 "state": "completed", 00:16:12.647 "digest": "sha512", 00:16:12.647 "dhgroup": "ffdhe3072" 00:16:12.647 } 00:16:12.647 } 00:16:12.647 ]' 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.647 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.905 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.905 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.905 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.162 00:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:16:14.110 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.110 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.110 00:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.110 00:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.110 00:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.110 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.110 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.110 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.367 00:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.624 00:16:14.624 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.624 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.624 00:20:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.880 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.880 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.880 00:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.880 00:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.880 00:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.880 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.880 { 00:16:14.880 "cntlid": 119, 00:16:14.880 "qid": 0, 00:16:14.880 "state": "enabled", 00:16:14.880 "thread": "nvmf_tgt_poll_group_000", 00:16:14.880 "listen_address": { 00:16:14.880 "trtype": "TCP", 00:16:14.880 "adrfam": "IPv4", 00:16:14.880 "traddr": "10.0.0.2", 00:16:14.880 "trsvcid": "4420" 00:16:14.880 }, 00:16:14.880 "peer_address": { 00:16:14.880 "trtype": "TCP", 00:16:14.880 "adrfam": "IPv4", 00:16:14.880 "traddr": "10.0.0.1", 00:16:14.880 "trsvcid": "57036" 00:16:14.880 }, 00:16:14.880 "auth": { 00:16:14.880 "state": "completed", 00:16:14.880 "digest": "sha512", 00:16:14.880 "dhgroup": "ffdhe3072" 00:16:14.880 } 00:16:14.880 } 00:16:14.880 ]' 00:16:14.880 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.136 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.136 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.136 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.136 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.136 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.136 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.136 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.394 00:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:16:16.322 00:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.322 00:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.322 00:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.322 00:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.322 00:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.322 00:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.322 00:20:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.322 00:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.322 00:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.578 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.141 00:16:17.141 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.141 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.141 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.398 { 00:16:17.398 "cntlid": 121, 00:16:17.398 "qid": 0, 00:16:17.398 "state": "enabled", 00:16:17.398 "thread": "nvmf_tgt_poll_group_000", 00:16:17.398 "listen_address": { 00:16:17.398 "trtype": "TCP", 00:16:17.398 "adrfam": "IPv4", 
00:16:17.398 "traddr": "10.0.0.2", 00:16:17.398 "trsvcid": "4420" 00:16:17.398 }, 00:16:17.398 "peer_address": { 00:16:17.398 "trtype": "TCP", 00:16:17.398 "adrfam": "IPv4", 00:16:17.398 "traddr": "10.0.0.1", 00:16:17.398 "trsvcid": "57058" 00:16:17.398 }, 00:16:17.398 "auth": { 00:16:17.398 "state": "completed", 00:16:17.398 "digest": "sha512", 00:16:17.398 "dhgroup": "ffdhe4096" 00:16:17.398 } 00:16:17.398 } 00:16:17.398 ]' 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.398 00:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.655 00:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:16:18.589 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.589 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.589 00:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.589 00:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.589 00:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.589 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.589 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.589 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.846 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:18.846 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.846 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.846 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:18.846 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.846 00:20:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.846 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.846 00:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.846 00:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.104 00:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.104 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.104 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.362 00:16:19.362 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.362 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.362 00:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.620 { 00:16:19.620 "cntlid": 123, 00:16:19.620 "qid": 0, 00:16:19.620 "state": "enabled", 00:16:19.620 "thread": "nvmf_tgt_poll_group_000", 00:16:19.620 "listen_address": { 00:16:19.620 "trtype": "TCP", 00:16:19.620 "adrfam": "IPv4", 00:16:19.620 "traddr": "10.0.0.2", 00:16:19.620 "trsvcid": "4420" 00:16:19.620 }, 00:16:19.620 "peer_address": { 00:16:19.620 "trtype": "TCP", 00:16:19.620 "adrfam": "IPv4", 00:16:19.620 "traddr": "10.0.0.1", 00:16:19.620 "trsvcid": "57088" 00:16:19.620 }, 00:16:19.620 "auth": { 00:16:19.620 "state": "completed", 00:16:19.620 "digest": "sha512", 00:16:19.620 "dhgroup": "ffdhe4096" 00:16:19.620 } 00:16:19.620 } 00:16:19.620 ]' 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.620 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.878 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.878 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.878 00:20:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.878 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.878 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.135 00:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:16:21.069 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.069 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.069 00:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.069 00:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.069 00:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.069 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.069 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.069 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.328 00:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.586 00:16:21.586 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.586 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.586 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.843 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.843 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.843 00:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.843 00:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.843 00:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.843 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.843 { 00:16:21.843 "cntlid": 125, 00:16:21.843 "qid": 0, 00:16:21.843 "state": "enabled", 00:16:21.843 "thread": "nvmf_tgt_poll_group_000", 00:16:21.843 "listen_address": { 00:16:21.843 "trtype": "TCP", 00:16:21.843 "adrfam": "IPv4", 00:16:21.843 "traddr": "10.0.0.2", 00:16:21.843 "trsvcid": "4420" 00:16:21.843 }, 00:16:21.843 "peer_address": { 00:16:21.843 "trtype": "TCP", 00:16:21.843 "adrfam": "IPv4", 00:16:21.843 "traddr": "10.0.0.1", 00:16:21.843 "trsvcid": "58464" 00:16:21.843 }, 00:16:21.843 "auth": { 00:16:21.843 "state": "completed", 00:16:21.843 "digest": "sha512", 00:16:21.843 "dhgroup": "ffdhe4096" 00:16:21.843 } 00:16:21.843 } 00:16:21.843 ]' 00:16:21.843 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.099 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.099 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.100 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.100 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.100 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.100 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.100 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.357 00:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:16:23.290 00:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
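Each connect/verify/disconnect cycle in this part of the log condenses to the sequence sketched below. This is a minimal sketch rather than the script itself: it assumes the SPDK target brought up earlier in the run is listening on 10.0.0.2:4420 with subsystem nqn.2024-03.io.spdk:cnode0 and already has DH-CHAP keys key0..key3 (and controller keys ckey0..ckey3) registered, rpc.py stands in for the full scripts/rpc.py path, and HOSTNQN/HOSTID are illustrative shell variables holding the uuid-based host NQN and host ID used throughout this log.

  # Host side (bdev_nvme stack on /var/tmp/host.sock): restrict negotiation to
  # the digest/dhgroup pair under test, e.g. sha512 + ffdhe4096.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side (default RPC socket): authorize the host NQN on the subsystem
  # with a key, plus a controller key for bidirectional authentication.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Authenticated attach from the SPDK host stack: the step under test.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Confirm on the target that the qpair finished authentication with the
  # expected parameters (the script's jq checks expect "completed", "sha512",
  # and the dhgroup being exercised).
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .state, .digest, .dhgroup'

  # Detach, then repeat the connect through the kernel initiator, passing the
  # same key material in its textual DHHC-1 form (placeholders here, not the
  # secrets from this run).
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret 'DHHC-1:02:<base64 secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<base64 secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The DHHC-1:NN: prefix is the standard textual representation of an NVMe DH-CHAP secret; NN identifies the hash associated with the secret (00 for an untransformed secret, 01/02/03 for SHA-256/384/512), which is why different keys in this log carry different prefixes. The enclosing loops then rerun this cycle for every keyid and for each dhgroup, ffdhe3072 through ffdhe8192 in this stretch of the log.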
00:16:23.290 00:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.290 00:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.290 00:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.290 00:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.290 00:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.290 00:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.290 00:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.548 00:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.806 00:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.806 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.806 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:24.064 00:16:24.064 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.064 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.064 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.322 { 00:16:24.322 "cntlid": 127, 00:16:24.322 "qid": 0, 00:16:24.322 "state": "enabled", 00:16:24.322 "thread": "nvmf_tgt_poll_group_000", 00:16:24.322 "listen_address": { 00:16:24.322 "trtype": "TCP", 00:16:24.322 "adrfam": "IPv4", 00:16:24.322 "traddr": "10.0.0.2", 00:16:24.322 "trsvcid": "4420" 00:16:24.322 }, 00:16:24.322 "peer_address": { 00:16:24.322 "trtype": "TCP", 00:16:24.322 "adrfam": "IPv4", 00:16:24.322 "traddr": "10.0.0.1", 00:16:24.322 "trsvcid": "58500" 00:16:24.322 }, 00:16:24.322 "auth": { 00:16:24.322 "state": "completed", 00:16:24.322 "digest": "sha512", 00:16:24.322 "dhgroup": "ffdhe4096" 00:16:24.322 } 00:16:24.322 } 00:16:24.322 ]' 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.322 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.580 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.580 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.580 00:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.838 00:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.772 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.030 00:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.597 00:16:26.597 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.597 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.597 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.884 { 00:16:26.884 "cntlid": 129, 00:16:26.884 "qid": 0, 00:16:26.884 "state": "enabled", 00:16:26.884 "thread": "nvmf_tgt_poll_group_000", 00:16:26.884 "listen_address": { 00:16:26.884 "trtype": "TCP", 00:16:26.884 "adrfam": "IPv4", 00:16:26.884 "traddr": "10.0.0.2", 00:16:26.884 "trsvcid": "4420" 00:16:26.884 }, 00:16:26.884 "peer_address": { 00:16:26.884 "trtype": "TCP", 00:16:26.884 "adrfam": "IPv4", 00:16:26.884 "traddr": "10.0.0.1", 00:16:26.884 "trsvcid": "58524" 00:16:26.884 }, 00:16:26.884 "auth": { 00:16:26.884 "state": "completed", 00:16:26.884 "digest": "sha512", 00:16:26.884 "dhgroup": "ffdhe6144" 00:16:26.884 } 00:16:26.884 } 00:16:26.884 ]' 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.884 00:20:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.884 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.143 00:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:16:28.076 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.076 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.076 00:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.076 00:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.076 00:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.076 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.076 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.076 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.334 00:20:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.334 00:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.901 00:16:28.901 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.901 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.901 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.159 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.159 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.159 00:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.159 00:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.159 00:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.159 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.159 { 00:16:29.159 "cntlid": 131, 00:16:29.159 "qid": 0, 00:16:29.159 "state": "enabled", 00:16:29.159 "thread": "nvmf_tgt_poll_group_000", 00:16:29.159 "listen_address": { 00:16:29.159 "trtype": "TCP", 00:16:29.159 "adrfam": "IPv4", 00:16:29.159 "traddr": "10.0.0.2", 00:16:29.159 "trsvcid": "4420" 00:16:29.159 }, 00:16:29.159 "peer_address": { 00:16:29.159 "trtype": "TCP", 00:16:29.159 "adrfam": "IPv4", 00:16:29.159 "traddr": "10.0.0.1", 00:16:29.159 "trsvcid": "58540" 00:16:29.159 }, 00:16:29.159 "auth": { 00:16:29.159 "state": "completed", 00:16:29.159 "digest": "sha512", 00:16:29.159 "dhgroup": "ffdhe6144" 00:16:29.159 } 00:16:29.159 } 00:16:29.159 ]' 00:16:29.159 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.417 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.417 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.417 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.417 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.417 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.417 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.417 00:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.675 00:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:16:30.610 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.610 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.610 00:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.610 00:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.610 00:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.610 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.610 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.610 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.869 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.438 00:16:31.438 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.438 00:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.438 00:20:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.695 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.695 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.695 00:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.696 { 00:16:31.696 "cntlid": 133, 00:16:31.696 "qid": 0, 00:16:31.696 "state": "enabled", 00:16:31.696 "thread": "nvmf_tgt_poll_group_000", 00:16:31.696 "listen_address": { 00:16:31.696 "trtype": "TCP", 00:16:31.696 "adrfam": "IPv4", 00:16:31.696 "traddr": "10.0.0.2", 00:16:31.696 "trsvcid": "4420" 00:16:31.696 }, 00:16:31.696 "peer_address": { 00:16:31.696 "trtype": "TCP", 00:16:31.696 "adrfam": "IPv4", 00:16:31.696 "traddr": "10.0.0.1", 00:16:31.696 "trsvcid": "58766" 00:16:31.696 }, 00:16:31.696 "auth": { 00:16:31.696 "state": "completed", 00:16:31.696 "digest": "sha512", 00:16:31.696 "dhgroup": "ffdhe6144" 00:16:31.696 } 00:16:31.696 } 00:16:31.696 ]' 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.696 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.955 00:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.330 00:20:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.330 00:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.897 00:16:33.897 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.897 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.897 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.155 { 00:16:34.155 "cntlid": 135, 00:16:34.155 "qid": 0, 00:16:34.155 "state": "enabled", 00:16:34.155 "thread": "nvmf_tgt_poll_group_000", 00:16:34.155 "listen_address": { 00:16:34.155 "trtype": "TCP", 00:16:34.155 "adrfam": "IPv4", 00:16:34.155 "traddr": "10.0.0.2", 00:16:34.155 "trsvcid": "4420" 00:16:34.155 }, 
00:16:34.155 "peer_address": { 00:16:34.155 "trtype": "TCP", 00:16:34.155 "adrfam": "IPv4", 00:16:34.155 "traddr": "10.0.0.1", 00:16:34.155 "trsvcid": "58796" 00:16:34.155 }, 00:16:34.155 "auth": { 00:16:34.155 "state": "completed", 00:16:34.155 "digest": "sha512", 00:16:34.155 "dhgroup": "ffdhe6144" 00:16:34.155 } 00:16:34.155 } 00:16:34.155 ]' 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.155 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.413 00:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.347 00:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.605 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.541 00:16:36.541 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.541 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.541 00:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.797 { 00:16:36.797 "cntlid": 137, 00:16:36.797 "qid": 0, 00:16:36.797 "state": "enabled", 00:16:36.797 "thread": "nvmf_tgt_poll_group_000", 00:16:36.797 "listen_address": { 00:16:36.797 "trtype": "TCP", 00:16:36.797 "adrfam": "IPv4", 00:16:36.797 "traddr": "10.0.0.2", 00:16:36.797 "trsvcid": "4420" 00:16:36.797 }, 00:16:36.797 "peer_address": { 00:16:36.797 "trtype": "TCP", 00:16:36.797 "adrfam": "IPv4", 00:16:36.797 "traddr": "10.0.0.1", 00:16:36.797 "trsvcid": "58818" 00:16:36.797 }, 00:16:36.797 "auth": { 00:16:36.797 "state": "completed", 00:16:36.797 "digest": "sha512", 00:16:36.797 "dhgroup": "ffdhe8192" 00:16:36.797 } 00:16:36.797 } 00:16:36.797 ]' 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.797 00:20:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.797 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.056 00:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.432 00:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.365 00:16:39.365 00:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.365 00:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.365 00:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.622 { 00:16:39.622 "cntlid": 139, 00:16:39.622 "qid": 0, 00:16:39.622 "state": "enabled", 00:16:39.622 "thread": "nvmf_tgt_poll_group_000", 00:16:39.622 "listen_address": { 00:16:39.622 "trtype": "TCP", 00:16:39.622 "adrfam": "IPv4", 00:16:39.622 "traddr": "10.0.0.2", 00:16:39.622 "trsvcid": "4420" 00:16:39.622 }, 00:16:39.622 "peer_address": { 00:16:39.622 "trtype": "TCP", 00:16:39.622 "adrfam": "IPv4", 00:16:39.622 "traddr": "10.0.0.1", 00:16:39.622 "trsvcid": "58850" 00:16:39.622 }, 00:16:39.622 "auth": { 00:16:39.622 "state": "completed", 00:16:39.622 "digest": "sha512", 00:16:39.622 "dhgroup": "ffdhe8192" 00:16:39.622 } 00:16:39.622 } 00:16:39.622 ]' 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.622 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.880 00:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGVjMTdhNGNhNWQ1OWNkYzYwN2E4OWE1ZGMzM2NhNjAMpPaj: --dhchap-ctrl-secret DHHC-1:02:Njk2NmUwYTkwYTJjNjg4NGUyNzNjNDkxNTdiMTBiMDg3NDQ5MzhjZGJhMjc3OTZiZqszYw==: 00:16:40.814 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.072 00:20:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.072 00:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.072 00:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.072 00:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.072 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.072 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.072 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.330 00:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.261 00:16:42.262 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.262 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.262 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.262 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.262 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.262 00:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.262 00:20:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.519 { 00:16:42.519 "cntlid": 141, 00:16:42.519 "qid": 0, 00:16:42.519 "state": "enabled", 00:16:42.519 "thread": "nvmf_tgt_poll_group_000", 00:16:42.519 "listen_address": { 00:16:42.519 "trtype": "TCP", 00:16:42.519 "adrfam": "IPv4", 00:16:42.519 "traddr": "10.0.0.2", 00:16:42.519 "trsvcid": "4420" 00:16:42.519 }, 00:16:42.519 "peer_address": { 00:16:42.519 "trtype": "TCP", 00:16:42.519 "adrfam": "IPv4", 00:16:42.519 "traddr": "10.0.0.1", 00:16:42.519 "trsvcid": "51572" 00:16:42.519 }, 00:16:42.519 "auth": { 00:16:42.519 "state": "completed", 00:16:42.519 "digest": "sha512", 00:16:42.519 "dhgroup": "ffdhe8192" 00:16:42.519 } 00:16:42.519 } 00:16:42.519 ]' 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.519 00:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.828 00:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWZlN2I1ZTMzNjI2Zjg5OTBjNDIxMzg1YTBmZDkwNTJjZjFlMDk2YjFlZTUyYTcy0m6aYw==: --dhchap-ctrl-secret DHHC-1:01:YzhhNWQ5MTA3ODViMDUxNTk5ZmNmZmU5OTI5MmYzOGY/0Qwq: 00:16:43.761 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.761 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.761 00:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.761 00:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.761 00:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.761 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.761 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.761 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.019 00:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.950 00:16:44.950 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.950 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.950 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.208 { 00:16:45.208 "cntlid": 143, 00:16:45.208 "qid": 0, 00:16:45.208 "state": "enabled", 00:16:45.208 "thread": "nvmf_tgt_poll_group_000", 00:16:45.208 "listen_address": { 00:16:45.208 "trtype": "TCP", 00:16:45.208 "adrfam": "IPv4", 00:16:45.208 "traddr": "10.0.0.2", 00:16:45.208 "trsvcid": "4420" 00:16:45.208 }, 00:16:45.208 "peer_address": { 00:16:45.208 "trtype": "TCP", 00:16:45.208 "adrfam": "IPv4", 00:16:45.208 "traddr": "10.0.0.1", 00:16:45.208 "trsvcid": "51614" 00:16:45.208 }, 00:16:45.208 "auth": { 00:16:45.208 "state": "completed", 00:16:45.208 "digest": "sha512", 00:16:45.208 "dhgroup": "ffdhe8192" 00:16:45.208 } 00:16:45.208 } 00:16:45.208 ]' 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.208 
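The block above is one pass of auth.sh's connect_authenticate helper, repeated for every digest/dhgroup/key combination in the trace: the host RPC server is restricted to the pair under test with bdev_nvme_set_options, the target registers the host NQN with the DH-HMAC-CHAP key via nvmf_subsystem_add_host, the host attaches a controller, and the resulting qpair's auth fields are checked through nvmf_subsystem_get_qpairs. A minimal shell sketch of that verification, assuming the same sockets, addresses, and NQNs as this run ($RPC and $hostnqn are illustrative shorthand, not names from the trace; the jq filters mirror the ones above):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # host side: negotiate only the digest/dhgroup pair under test
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # attach with the key the target registered for this host NQN
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    # target side (default /var/tmp/spdk.sock): confirm the qpair authenticated as expected
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Each pass then detaches the controller, re-connects once through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:...) as a second check, and removes the host from the subsystem before the next combination.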
00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.208 00:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.772 00:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.714 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.971 00:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.903 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.903 { 00:16:47.903 "cntlid": 145, 00:16:47.903 "qid": 0, 00:16:47.903 "state": "enabled", 00:16:47.903 "thread": "nvmf_tgt_poll_group_000", 00:16:47.903 "listen_address": { 00:16:47.903 "trtype": "TCP", 00:16:47.903 "adrfam": "IPv4", 00:16:47.903 "traddr": "10.0.0.2", 00:16:47.903 "trsvcid": "4420" 00:16:47.903 }, 00:16:47.903 "peer_address": { 00:16:47.903 "trtype": "TCP", 00:16:47.903 "adrfam": "IPv4", 00:16:47.903 "traddr": "10.0.0.1", 00:16:47.903 "trsvcid": "51644" 00:16:47.903 }, 00:16:47.903 "auth": { 00:16:47.903 "state": "completed", 00:16:47.903 "digest": "sha512", 00:16:47.903 "dhgroup": "ffdhe8192" 00:16:47.903 } 00:16:47.903 } 00:16:47.903 ]' 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.903 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.160 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.161 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.161 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.161 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.161 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.418 00:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDdkZjA1OTIxN2VlNmJjZGQ5NTM3MGEzYjQzNWVjZGEyMTBiYWE1ZGJiYTliYjQzkudnXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Y0YjMyZjg4MjY2ZTJiNWRmMWQ5ZWJmZDU3MWI2MmJlMDEwYWQ1NTA3OWU0NjRkYmQ2Mjg1NWZhZjc3OTRmZFi5rxg=: 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:49.348 00:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:16:50.279 request: 00:16:50.279 { 00:16:50.279 "name": "nvme0", 00:16:50.279 "trtype": "tcp", 00:16:50.279 "traddr": "10.0.0.2", 00:16:50.279 "adrfam": "ipv4", 00:16:50.279 "trsvcid": "4420", 00:16:50.279 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:50.279 "prchk_reftag": false, 00:16:50.279 "prchk_guard": false, 00:16:50.279 "hdgst": false, 00:16:50.279 "ddgst": false, 00:16:50.279 "dhchap_key": "key2", 00:16:50.279 "method": "bdev_nvme_attach_controller", 00:16:50.279 "req_id": 1 00:16:50.279 } 00:16:50.279 Got JSON-RPC error response 00:16:50.279 response: 00:16:50.279 { 00:16:50.279 "code": -5, 00:16:50.279 "message": "Input/output error" 00:16:50.279 } 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:50.279 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.280 00:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.280 00:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.844 request: 00:16:50.844 { 00:16:50.844 "name": "nvme0", 00:16:50.844 "trtype": "tcp", 00:16:50.844 "traddr": "10.0.0.2", 00:16:50.844 "adrfam": "ipv4", 00:16:50.844 "trsvcid": "4420", 00:16:50.844 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:50.844 "prchk_reftag": false, 00:16:50.844 "prchk_guard": false, 00:16:50.844 "hdgst": false, 00:16:50.844 "ddgst": false, 00:16:50.844 "dhchap_key": "key1", 00:16:50.844 "dhchap_ctrlr_key": "ckey2", 00:16:50.844 "method": "bdev_nvme_attach_controller", 00:16:50.844 "req_id": 1 00:16:50.844 } 00:16:50.844 Got JSON-RPC error response 00:16:50.844 response: 00:16:50.844 { 00:16:50.844 "code": -5, 00:16:50.844 "message": "Input/output error" 00:16:50.844 } 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.102 00:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.033 request: 00:16:52.033 { 00:16:52.033 "name": "nvme0", 00:16:52.033 "trtype": "tcp", 00:16:52.033 "traddr": "10.0.0.2", 00:16:52.033 "adrfam": "ipv4", 00:16:52.033 "trsvcid": "4420", 00:16:52.033 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:52.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:52.033 "prchk_reftag": false, 00:16:52.033 "prchk_guard": false, 00:16:52.033 "hdgst": false, 00:16:52.033 "ddgst": false, 00:16:52.034 "dhchap_key": "key1", 00:16:52.034 "dhchap_ctrlr_key": "ckey1", 00:16:52.034 "method": "bdev_nvme_attach_controller", 00:16:52.034 "req_id": 1 00:16:52.034 } 00:16:52.034 Got JSON-RPC error response 00:16:52.034 response: 00:16:52.034 { 00:16:52.034 "code": -5, 00:16:52.034 "message": "Input/output error" 00:16:52.034 } 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2274637 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2274637 ']' 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2274637 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2274637 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2274637' 00:16:52.034 killing process with pid 2274637 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2274637 00:16:52.034 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2274637 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2297457 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2297457 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2297457 ']' 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.291 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2297457 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2297457 ']' 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
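The harness has just killed the first nvmf_tgt (pid 2274637) and restarted it with DH-HMAC-CHAP debug logging enabled (-L nvmf_auth, new pid 2297457). The remaining cases, like the key2 and ckey2 mismatches above, are expected failures: the host deliberately offers a key, digest, or dhgroup the target will not accept, and each attach attempt is wrapped in the NOT helper so that the JSON-RPC error -5 ("Input/output error") counts as a pass. A reduced sketch of that expected-failure pattern, reusing the $RPC and $hostnqn shorthand from the earlier annotation (illustrative only, not lifted from the trace):

    # target registered key1 only; offering key2 must fail DH-HMAC-CHAP
    if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi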
00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.549 00:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.807 00:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.738 00:16:53.738 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.738 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.738 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.995 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.995 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.995 00:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.995 00:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.995 00:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.995 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.995 { 00:16:53.995 
"cntlid": 1, 00:16:53.995 "qid": 0, 00:16:53.995 "state": "enabled", 00:16:53.995 "thread": "nvmf_tgt_poll_group_000", 00:16:53.995 "listen_address": { 00:16:53.995 "trtype": "TCP", 00:16:53.995 "adrfam": "IPv4", 00:16:53.995 "traddr": "10.0.0.2", 00:16:53.995 "trsvcid": "4420" 00:16:53.996 }, 00:16:53.996 "peer_address": { 00:16:53.996 "trtype": "TCP", 00:16:53.996 "adrfam": "IPv4", 00:16:53.996 "traddr": "10.0.0.1", 00:16:53.996 "trsvcid": "41152" 00:16:53.996 }, 00:16:53.996 "auth": { 00:16:53.996 "state": "completed", 00:16:53.996 "digest": "sha512", 00:16:53.996 "dhgroup": "ffdhe8192" 00:16:53.996 } 00:16:53.996 } 00:16:53.996 ]' 00:16:53.996 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.996 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.996 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.253 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.253 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.253 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.253 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.253 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.510 00:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTlhYzVhOGFkMTc2ZDEyODI3YjNkY2FjNTU0Y2MyYzMwZmIwNDM5NzgwZWZiMTE3ZGQwNmUzZGVhMDViNzJiNxML2eY=: 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:55.442 00:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.699 00:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.955 request: 00:16:55.955 { 00:16:55.955 "name": "nvme0", 00:16:55.955 "trtype": "tcp", 00:16:55.955 "traddr": "10.0.0.2", 00:16:55.955 "adrfam": "ipv4", 00:16:55.955 "trsvcid": "4420", 00:16:55.955 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.955 "prchk_reftag": false, 00:16:55.955 "prchk_guard": false, 00:16:55.955 "hdgst": false, 00:16:55.955 "ddgst": false, 00:16:55.955 "dhchap_key": "key3", 00:16:55.955 "method": "bdev_nvme_attach_controller", 00:16:55.955 "req_id": 1 00:16:55.955 } 00:16:55.955 Got JSON-RPC error response 00:16:55.955 response: 00:16:55.955 { 00:16:55.955 "code": -5, 00:16:55.955 "message": "Input/output error" 00:16:55.955 } 00:16:55.955 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:55.955 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:55.955 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:55.955 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:55.955 00:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:55.955 00:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:55.955 00:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:55.955 00:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:56.211 00:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.211 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:56.211 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.211 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:56.212 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.212 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:56.212 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.212 00:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.212 00:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.468 request: 00:16:56.468 { 00:16:56.468 "name": "nvme0", 00:16:56.468 "trtype": "tcp", 00:16:56.468 "traddr": "10.0.0.2", 00:16:56.468 "adrfam": "ipv4", 00:16:56.468 "trsvcid": "4420", 00:16:56.468 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:56.468 "prchk_reftag": false, 00:16:56.468 "prchk_guard": false, 00:16:56.468 "hdgst": false, 00:16:56.468 "ddgst": false, 00:16:56.468 "dhchap_key": "key3", 00:16:56.468 "method": "bdev_nvme_attach_controller", 00:16:56.468 "req_id": 1 00:16:56.468 } 00:16:56.468 Got JSON-RPC error response 00:16:56.468 response: 00:16:56.468 { 00:16:56.468 "code": -5, 00:16:56.468 "message": "Input/output error" 00:16:56.468 } 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.468 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.726 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.726 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.726 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.726 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.726 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.726 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.726 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.983 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:57.240 request: 00:16:57.240 { 00:16:57.240 "name": "nvme0", 00:16:57.240 "trtype": "tcp", 00:16:57.240 "traddr": "10.0.0.2", 00:16:57.240 "adrfam": "ipv4", 00:16:57.240 "trsvcid": "4420", 00:16:57.240 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:57.240 "prchk_reftag": false, 00:16:57.240 "prchk_guard": false, 00:16:57.240 "hdgst": false, 00:16:57.240 "ddgst": false, 00:16:57.240 
"dhchap_key": "key0", 00:16:57.240 "dhchap_ctrlr_key": "key1", 00:16:57.240 "method": "bdev_nvme_attach_controller", 00:16:57.240 "req_id": 1 00:16:57.240 } 00:16:57.240 Got JSON-RPC error response 00:16:57.240 response: 00:16:57.240 { 00:16:57.240 "code": -5, 00:16:57.240 "message": "Input/output error" 00:16:57.240 } 00:16:57.240 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:57.240 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:57.240 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:57.240 00:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:57.240 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:57.240 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:57.496 00:16:57.496 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:57.496 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.496 00:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:57.752 00:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.752 00:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.752 00:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2274792 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2274792 ']' 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2274792 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2274792 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2274792' 00:16:58.019 killing process with pid 2274792 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2274792 00:16:58.019 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2274792 
00:16:58.293 00:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:58.293 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.293 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:58.293 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.293 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:58.293 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.293 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.293 rmmod nvme_tcp 00:16:58.550 rmmod nvme_fabrics 00:16:58.550 rmmod nvme_keyring 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2297457 ']' 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2297457 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2297457 ']' 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2297457 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2297457 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2297457' 00:16:58.550 killing process with pid 2297457 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2297457 00:16:58.550 00:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2297457 00:16:58.807 00:20:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.807 00:20:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.807 00:20:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.807 00:20:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.807 00:20:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.808 00:20:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.808 00:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.808 00:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.707 00:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.707 00:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.PWA /tmp/spdk.key-sha256.uT4 /tmp/spdk.key-sha384.8c0 /tmp/spdk.key-sha512.Dzr /tmp/spdk.key-sha512.dIQ /tmp/spdk.key-sha384.5e5 /tmp/spdk.key-sha256.fDd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:00.707 00:17:00.707 real 3m11.338s 00:17:00.707 user 7m25.203s 00:17:00.707 sys 0m25.269s 00:17:00.708 00:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.708 00:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.708 ************************************ 00:17:00.708 END TEST nvmf_auth_target 00:17:00.708 ************************************ 00:17:00.708 00:20:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:00.708 00:20:52 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:00.708 00:20:52 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:00.708 00:20:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:00.708 00:20:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.708 00:20:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.966 ************************************ 00:17:00.966 START TEST nvmf_bdevio_no_huge 00:17:00.966 ************************************ 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:00.966 * Looking for test storage... 00:17:00.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
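The xtrace above shows nvmf/common.sh assembling NVMF_APP for the no-hugepage variant: the base app gets its shm id and tracepoint mask, then "${NO_HUGE[@]}" appends the flags that let the target run on ordinary anonymous memory. A minimal sketch of the invocation this produces, assuming NO_HUGE carries exactly the two flags visible when the target is launched later in this suite (-s caps DPDK memory in MB; -m 0x78 pins reactors to cores 3-6):

  NO_HUGE=(--no-huge -s 1024)
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i 0 -e 0xFFFF)        # shm id 0, enable all tracepoint groups
  NVMF_APP+=("${NO_HUGE[@]}")       # no hugepages, 1 GiB anonymous-memory cap
  "${NVMF_APP[@]}" -m 0x78 &        # matches: nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78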
00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.966 00:20:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.866 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.867 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:03.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:17:03.126 00:17:03.126 --- 10.0.0.2 ping statistics --- 00:17:03.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.126 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:17:03.126 00:17:03.126 --- 10.0.0.1 ping statistics --- 00:17:03.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.126 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2300218 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2300218 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2300218 ']' 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.126 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.127 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.127 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.127 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.127 [2024-07-16 00:20:54.611721] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:17:03.127 [2024-07-16 00:20:54.611806] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:03.127 [2024-07-16 00:20:54.684089] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.385 [2024-07-16 00:20:54.792146] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:03.385 [2024-07-16 00:20:54.792207] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.385 [2024-07-16 00:20:54.792220] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.385 [2024-07-16 00:20:54.792230] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.385 [2024-07-16 00:20:54.792240] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.385 [2024-07-16 00:20:54.792354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:03.385 [2024-07-16 00:20:54.792420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:03.385 [2024-07-16 00:20:54.792467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:03.385 [2024-07-16 00:20:54.792469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 [2024-07-16 00:20:54.915346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 Malloc0 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.385 00:20:54 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 [2024-07-16 00:20:54.952819] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:03.385 { 00:17:03.385 "params": { 00:17:03.385 "name": "Nvme$subsystem", 00:17:03.385 "trtype": "$TEST_TRANSPORT", 00:17:03.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.385 "adrfam": "ipv4", 00:17:03.385 "trsvcid": "$NVMF_PORT", 00:17:03.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.385 "hdgst": ${hdgst:-false}, 00:17:03.385 "ddgst": ${ddgst:-false} 00:17:03.385 }, 00:17:03.385 "method": "bdev_nvme_attach_controller" 00:17:03.385 } 00:17:03.385 EOF 00:17:03.385 )") 00:17:03.385 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:03.386 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:03.386 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:03.386 00:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:03.386 "params": { 00:17:03.386 "name": "Nvme1", 00:17:03.386 "trtype": "tcp", 00:17:03.386 "traddr": "10.0.0.2", 00:17:03.386 "adrfam": "ipv4", 00:17:03.386 "trsvcid": "4420", 00:17:03.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.386 "hdgst": false, 00:17:03.386 "ddgst": false 00:17:03.386 }, 00:17:03.386 "method": "bdev_nvme_attach_controller" 00:17:03.386 }' 00:17:03.386 [2024-07-16 00:20:54.998185] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:03.386 [2024-07-16 00:20:54.998272] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2300252 ] 00:17:03.643 [2024-07-16 00:20:55.061090] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:03.643 [2024-07-16 00:20:55.177535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.643 [2024-07-16 00:20:55.177597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.643 [2024-07-16 00:20:55.177601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.900 I/O targets: 00:17:03.900 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:03.900 00:17:03.900 00:17:03.900 CUnit - A unit testing framework for C - Version 2.1-3 00:17:03.900 http://cunit.sourceforge.net/ 00:17:03.900 00:17:03.900 00:17:03.900 Suite: bdevio tests on: Nvme1n1 00:17:04.160 Test: blockdev write read block ...passed 00:17:04.160 Test: blockdev write zeroes read block ...passed 00:17:04.160 Test: blockdev write zeroes read no split ...passed 00:17:04.160 Test: blockdev write zeroes read split ...passed 00:17:04.160 Test: blockdev write zeroes read split partial ...passed 00:17:04.160 Test: blockdev reset ...[2024-07-16 00:20:55.717403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:04.160 [2024-07-16 00:20:55.717523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2349100 (9): Bad file descriptor 00:17:04.420 [2024-07-16 00:20:55.778477] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:04.420 passed 00:17:04.420 Test: blockdev write read 8 blocks ...passed 00:17:04.420 Test: blockdev write read size > 128k ...passed 00:17:04.420 Test: blockdev write read invalid size ...passed 00:17:04.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:04.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:04.420 Test: blockdev write read max offset ...passed 00:17:04.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:04.420 Test: blockdev writev readv 8 blocks ...passed 00:17:04.420 Test: blockdev writev readv 30 x 1block ...passed 00:17:04.679 Test: blockdev writev readv block ...passed 00:17:04.679 Test: blockdev writev readv size > 128k ...passed 00:17:04.679 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:04.679 Test: blockdev comparev and writev ...[2024-07-16 00:20:56.041030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.679 [2024-07-16 00:20:56.041065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.041089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.679 [2024-07-16 00:20:56.041106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.041496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.679 [2024-07-16 00:20:56.041521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.041544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.679 [2024-07-16 00:20:56.041560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.041948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.679 [2024-07-16 00:20:56.041972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.041993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.679 [2024-07-16 00:20:56.042010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.042384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.679 [2024-07-16 00:20:56.042407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.042428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.679 [2024-07-16 00:20:56.042451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:04.679 passed 00:17:04.679 Test: blockdev nvme passthru rw ...passed 00:17:04.679 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:20:56.126288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.679 [2024-07-16 00:20:56.126316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.126602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.679 [2024-07-16 00:20:56.126626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.126861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.679 [2024-07-16 00:20:56.126893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:04.679 [2024-07-16 00:20:56.127172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.679 [2024-07-16 00:20:56.127196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:04.679 passed 00:17:04.679 Test: blockdev nvme admin passthru ...passed 00:17:04.679 Test: blockdev copy ...passed 00:17:04.679 00:17:04.679 Run Summary: Type Total Ran Passed Failed Inactive 00:17:04.679 suites 1 1 n/a 0 0 00:17:04.679 tests 23 23 23 0 0 00:17:04.679 asserts 152 152 152 0 n/a 00:17:04.679 00:17:04.679 Elapsed time = 1.365 seconds 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.244 rmmod nvme_tcp 00:17:05.244 rmmod nvme_fabrics 00:17:05.244 rmmod nvme_keyring 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2300218 ']' 00:17:05.244 00:20:56 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2300218 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2300218 ']' 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2300218 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2300218 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2300218' 00:17:05.244 killing process with pid 2300218 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2300218 00:17:05.244 00:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2300218 00:17:05.504 00:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:05.504 00:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.504 00:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.504 00:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.504 00:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.504 00:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.504 00:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.504 00:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.041 00:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.041 00:17:08.041 real 0m6.745s 00:17:08.041 user 0m12.038s 00:17:08.041 sys 0m2.501s 00:17:08.041 00:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.041 00:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:08.041 ************************************ 00:17:08.041 END TEST nvmf_bdevio_no_huge 00:17:08.041 ************************************ 00:17:08.041 00:20:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:08.041 00:20:59 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:08.041 00:20:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:08.041 00:20:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.041 00:20:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.041 ************************************ 00:17:08.041 START TEST nvmf_tls 00:17:08.041 ************************************ 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:08.041 * Looking for test storage... 
00:17:08.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.041 00:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.042 00:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.940 
00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.940 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:09.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:09.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:09.941 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:09.941 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:09.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:17:09.941 00:17:09.941 --- 10.0.0.2 ping statistics --- 00:17:09.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.941 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:17:09.941 00:17:09.941 --- 10.0.0.1 ping statistics --- 00:17:09.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.941 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2302509 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2302509 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2302509 ']' 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.941 [2024-07-16 00:21:01.276222] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:17:09.941 [2024-07-16 00:21:01.276303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.941 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.941 [2024-07-16 00:21:01.344559] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.941 [2024-07-16 00:21:01.458871] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.941 [2024-07-16 00:21:01.458946] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
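For reference, the network plumbing logged above condenses to the sketch below (cvl_0_0/cvl_0_1 are the renamed e810 ports; every command appears verbatim in the trace, the grouping and comments are added here):

# flush leftovers, then move one port into a private namespace for the target
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator side stays in the default namespace, target side lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# admit NVMe/TCP traffic on the initiator port, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1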
00:17:09.941 [2024-07-16 00:21:01.458961] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.941 [2024-07-16 00:21:01.458972] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.941 [2024-07-16 00:21:01.458983] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.941 [2024-07-16 00:21:01.459026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:09.941 00:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:10.199 true 00:17:10.199 00:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:10.199 00:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:10.458 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:10.458 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:10.458 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:10.716 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:10.716 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:10.974 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:10.974 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:10.974 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:11.233 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.233 00:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:11.490 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:11.490 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:11.490 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.490 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:11.746 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:11.746 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:11.746 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:12.002 00:21:03 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.002 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:12.260 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:12.260 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:12.260 00:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:12.517 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.517 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:12.775 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:12.775 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:12.776 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.sp60pSuG6w 00:17:13.035 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:13.035 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6ckTY2OL7W 00:17:13.035 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:13.035 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:13.035 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.sp60pSuG6w 00:17:13.035 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6ckTY2OL7W 00:17:13.035 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:13.035 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:13.608 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.sp60pSuG6w 00:17:13.608 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sp60pSuG6w 00:17:13.608 00:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:13.866 [2024-07-16 00:21:05.231218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.866 00:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:14.125 00:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:14.384 [2024-07-16 00:21:05.780715] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:14.384 [2024-07-16 00:21:05.780995] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.384 00:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:14.646 malloc0 00:17:14.646 00:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:14.928 00:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sp60pSuG6w 00:17:15.195 [2024-07-16 00:21:06.562693] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:15.195 00:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sp60pSuG6w 00:17:15.195 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.176 Initializing NVMe Controllers 00:17:25.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:25.176 Initialization complete. Launching workers. 
00:17:25.176 ======================================================== 00:17:25.176 Latency(us) 00:17:25.176 Device Information : IOPS MiB/s Average min max 00:17:25.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7637.58 29.83 8382.43 1189.49 10869.76 00:17:25.176 ======================================================== 00:17:25.176 Total : 7637.58 29.83 8382.43 1189.49 10869.76 00:17:25.176 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sp60pSuG6w 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sp60pSuG6w' 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2304834 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2304834 /var/tmp/bdevperf.sock 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2304834 ']' 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.176 00:21:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.176 [2024-07-16 00:21:16.728678] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:25.176 [2024-07-16 00:21:16.728768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304834 ] 00:17:25.176 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.435 [2024-07-16 00:21:16.792220] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.435 [2024-07-16 00:21:16.902268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.435 00:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.435 00:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:25.435 00:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sp60pSuG6w 00:17:25.695 [2024-07-16 00:21:17.289873] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:25.695 [2024-07-16 00:21:17.290021] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:25.955 TLSTESTn1 00:17:25.955 00:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:25.955 Running I/O for 10 seconds... 00:17:38.165 00:17:38.165 Latency(us) 00:17:38.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.165 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:38.165 Verification LBA range: start 0x0 length 0x2000 00:17:38.165 TLSTESTn1 : 10.09 1218.73 4.76 0.00 0.00 104656.66 6602.15 149130.81 00:17:38.165 =================================================================================================================== 00:17:38.165 Total : 1218.73 4.76 0.00 0.00 104656.66 6602.15 149130.81 00:17:38.165 0 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2304834 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2304834 ']' 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2304834 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2304834 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2304834' 00:17:38.165 killing process with pid 2304834 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2304834 00:17:38.165 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.165 00:17:38.165 Latency(us) 00:17:38.165 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:38.165 =================================================================================================================== 00:17:38.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.165 [2024-07-16 00:21:27.658047] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2304834 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ckTY2OL7W 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ckTY2OL7W 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ckTY2OL7W 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6ckTY2OL7W' 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2306142 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2306142 /var/tmp/bdevperf.sock 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2306142 ']' 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.165 00:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.165 [2024-07-16 00:21:27.972841] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:38.165 [2024-07-16 00:21:27.972938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306142 ] 00:17:38.165 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.165 [2024-07-16 00:21:28.032220] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.165 [2024-07-16 00:21:28.137628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6ckTY2OL7W 00:17:38.165 [2024-07-16 00:21:28.486644] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.165 [2024-07-16 00:21:28.486772] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.165 [2024-07-16 00:21:28.493766] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:38.165 [2024-07-16 00:21:28.494023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6150 (107): Transport endpoint is not connected 00:17:38.165 [2024-07-16 00:21:28.495012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6150 (9): Bad file descriptor 00:17:38.165 [2024-07-16 00:21:28.496011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:38.165 [2024-07-16 00:21:28.496033] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:38.165 [2024-07-16 00:21:28.496046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
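The attach above fails (producing the JSON-RPC error dumped below) because the target holds the retained PSK for /tmp/tmp.sp60pSuG6w while the initiator presents /tmp/tmp.6ckTY2OL7W. Both key files came from format_interchange_psk earlier in the trace; a minimal sketch of that construction, assuming (as in SPDK's format_key test helper) that the key string is used as raw ASCII bytes and a little-endian zlib CRC-32 is appended before base64 encoding:

format_interchange_psk() {
    local key=$1 digest=$2
    # digest fills the hash field of the interchange format, e.g. 1 -> "01"
    python3 - << EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
# the run above logged: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: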
00:17:38.165 request: 00:17:38.165 { 00:17:38.165 "name": "TLSTEST", 00:17:38.165 "trtype": "tcp", 00:17:38.165 "traddr": "10.0.0.2", 00:17:38.165 "adrfam": "ipv4", 00:17:38.165 "trsvcid": "4420", 00:17:38.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.165 "prchk_reftag": false, 00:17:38.165 "prchk_guard": false, 00:17:38.165 "hdgst": false, 00:17:38.165 "ddgst": false, 00:17:38.165 "psk": "/tmp/tmp.6ckTY2OL7W", 00:17:38.165 "method": "bdev_nvme_attach_controller", 00:17:38.165 "req_id": 1 00:17:38.165 } 00:17:38.165 Got JSON-RPC error response 00:17:38.165 response: 00:17:38.165 { 00:17:38.165 "code": -5, 00:17:38.165 "message": "Input/output error" 00:17:38.165 } 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2306142 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2306142 ']' 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2306142 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2306142 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2306142' 00:17:38.165 killing process with pid 2306142 00:17:38.165 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2306142 00:17:38.165 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.165 00:17:38.165 Latency(us) 00:17:38.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.165 =================================================================================================================== 00:17:38.165 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.166 [2024-07-16 00:21:28.546839] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2306142 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sp60pSuG6w 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sp60pSuG6w 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sp60pSuG6w 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sp60pSuG6w' 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2306283 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2306283 /var/tmp/bdevperf.sock 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2306283 ']' 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.166 00:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.166 [2024-07-16 00:21:28.820722] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:38.166 [2024-07-16 00:21:28.820813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306283 ] 00:17:38.166 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.166 [2024-07-16 00:21:28.879160] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.166 [2024-07-16 00:21:28.985909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.sp60pSuG6w 00:17:38.166 [2024-07-16 00:21:29.321510] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.166 [2024-07-16 00:21:29.321645] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.166 [2024-07-16 00:21:29.327264] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:38.166 [2024-07-16 00:21:29.327296] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:38.166 [2024-07-16 00:21:29.327344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:38.166 [2024-07-16 00:21:29.327883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230f150 (107): Transport endpoint is not connected 00:17:38.166 [2024-07-16 00:21:29.328874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230f150 (9): Bad file descriptor 00:17:38.166 [2024-07-16 00:21:29.329872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:38.166 [2024-07-16 00:21:29.329914] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:38.166 [2024-07-16 00:21:29.329928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
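This second negative case fails for a different reason, visible in the error response below: the target derives the TLS PSK identity from the client hostnqn plus the subsystem NQN ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"), and only host1 was ever registered with a PSK. A hedged sketch of what a passing variant would add, reusing the nvmf_subsystem_add_host RPC shown earlier (rpc.py path shortened to repo-relative):

scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.sp60pSuG6w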
00:17:38.166 request: 00:17:38.166 { 00:17:38.166 "name": "TLSTEST", 00:17:38.166 "trtype": "tcp", 00:17:38.166 "traddr": "10.0.0.2", 00:17:38.166 "adrfam": "ipv4", 00:17:38.166 "trsvcid": "4420", 00:17:38.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.166 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:38.166 "prchk_reftag": false, 00:17:38.166 "prchk_guard": false, 00:17:38.166 "hdgst": false, 00:17:38.166 "ddgst": false, 00:17:38.166 "psk": "/tmp/tmp.sp60pSuG6w", 00:17:38.166 "method": "bdev_nvme_attach_controller", 00:17:38.166 "req_id": 1 00:17:38.166 } 00:17:38.166 Got JSON-RPC error response 00:17:38.166 response: 00:17:38.166 { 00:17:38.166 "code": -5, 00:17:38.166 "message": "Input/output error" 00:17:38.166 } 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2306283 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2306283 ']' 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2306283 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2306283 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2306283' 00:17:38.166 killing process with pid 2306283 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2306283 00:17:38.166 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.166 00:17:38.166 Latency(us) 00:17:38.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.166 =================================================================================================================== 00:17:38.166 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.166 [2024-07-16 00:21:29.382083] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2306283 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sp60pSuG6w 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sp60pSuG6w 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sp60pSuG6w 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sp60pSuG6w' 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2306420 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2306420 /var/tmp/bdevperf.sock 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2306420 ']' 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.166 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.166 [2024-07-16 00:21:29.679318] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:38.166 [2024-07-16 00:21:29.679411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306420 ] 00:17:38.166 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.166 [2024-07-16 00:21:29.736638] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.425 [2024-07-16 00:21:29.839464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.425 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.425 00:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:38.425 00:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sp60pSuG6w 00:17:38.683 [2024-07-16 00:21:30.216071] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.683 [2024-07-16 00:21:30.216213] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.683 [2024-07-16 00:21:30.228087] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:38.683 [2024-07-16 00:21:30.228116] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:38.683 [2024-07-16 00:21:30.228161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:38.683 [2024-07-16 00:21:30.228503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b98150 (107): Transport endpoint is not connected 00:17:38.683 [2024-07-16 00:21:30.229493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b98150 (9): Bad file descriptor 00:17:38.683 [2024-07-16 00:21:30.230492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:38.683 [2024-07-16 00:21:30.230511] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:38.683 [2024-07-16 00:21:30.230533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
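The third case inverts the mismatch: host1 is valid but subsystem nqn.2016-06.io.spdk:cnode2 was never created, so the PSK identity lookup fails again (error response below). Making this attach succeed would mean standing up cnode2 the same way cnode1 was configured earlier; a sketch mirroring those RPCs (the serial number here is illustrative, not from the log):

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sp60pSuG6w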
00:17:38.683 request: 00:17:38.683 { 00:17:38.683 "name": "TLSTEST", 00:17:38.683 "trtype": "tcp", 00:17:38.683 "traddr": "10.0.0.2", 00:17:38.683 "adrfam": "ipv4", 00:17:38.683 "trsvcid": "4420", 00:17:38.683 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:38.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.683 "prchk_reftag": false, 00:17:38.683 "prchk_guard": false, 00:17:38.683 "hdgst": false, 00:17:38.683 "ddgst": false, 00:17:38.683 "psk": "/tmp/tmp.sp60pSuG6w", 00:17:38.683 "method": "bdev_nvme_attach_controller", 00:17:38.683 "req_id": 1 00:17:38.683 } 00:17:38.683 Got JSON-RPC error response 00:17:38.683 response: 00:17:38.683 { 00:17:38.683 "code": -5, 00:17:38.683 "message": "Input/output error" 00:17:38.683 } 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2306420 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2306420 ']' 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2306420 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2306420 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2306420' 00:17:38.683 killing process with pid 2306420 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2306420 00:17:38.683 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.683 00:17:38.683 Latency(us) 00:17:38.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.683 =================================================================================================================== 00:17:38.683 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.683 [2024-07-16 00:21:30.273502] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.683 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2306420 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2306554 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2306554 /var/tmp/bdevperf.sock 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2306554 ']' 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.941 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.941 [2024-07-16 00:21:30.553344] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:38.941 [2024-07-16 00:21:30.553433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306554 ] 00:17:39.201 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.201 [2024-07-16 00:21:30.612702] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.201 [2024-07-16 00:21:30.720337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.460 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.460 00:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:39.460 00:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:39.460 [2024-07-16 00:21:31.060374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:39.460 [2024-07-16 00:21:31.061806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176b910 (9): Bad file descriptor 00:17:39.460 [2024-07-16 00:21:31.062804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:39.460 [2024-07-16 00:21:31.062825] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:39.460 [2024-07-16 00:21:31.062838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:39.460 request: 00:17:39.460 { 00:17:39.460 "name": "TLSTEST", 00:17:39.460 "trtype": "tcp", 00:17:39.460 "traddr": "10.0.0.2", 00:17:39.460 "adrfam": "ipv4", 00:17:39.460 "trsvcid": "4420", 00:17:39.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.460 "prchk_reftag": false, 00:17:39.460 "prchk_guard": false, 00:17:39.460 "hdgst": false, 00:17:39.460 "ddgst": false, 00:17:39.460 "method": "bdev_nvme_attach_controller", 00:17:39.460 "req_id": 1 00:17:39.460 } 00:17:39.460 Got JSON-RPC error response 00:17:39.461 response: 00:17:39.461 { 00:17:39.461 "code": -5, 00:17:39.461 "message": "Input/output error" 00:17:39.461 } 00:17:39.719 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2306554 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2306554 ']' 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2306554 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2306554 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2306554' 00:17:39.720 killing process with pid 2306554 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2306554 00:17:39.720 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.720 00:17:39.720 Latency(us) 00:17:39.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.720 =================================================================================================================== 00:17:39.720 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.720 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2306554 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2302509 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2302509 ']' 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2302509 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2302509 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2302509' 00:17:39.979 
killing process with pid 2302509 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2302509 00:17:39.979 [2024-07-16 00:21:31.400997] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:39.979 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2302509 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ANJewCPW44 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ANJewCPW44 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2306704 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2306704 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2306704 ']' 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.238 00:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.238 [2024-07-16 00:21:31.796347] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:40.238 [2024-07-16 00:21:31.796440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.238 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.496 [2024-07-16 00:21:31.867601] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.496 [2024-07-16 00:21:31.983652] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.496 [2024-07-16 00:21:31.983717] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.496 [2024-07-16 00:21:31.983743] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.496 [2024-07-16 00:21:31.983757] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.496 [2024-07-16 00:21:31.983769] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.496 [2024-07-16 00:21:31.983805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ANJewCPW44 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ANJewCPW44 00:17:41.429 00:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:41.429 [2024-07-16 00:21:32.992692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.429 00:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:41.998 00:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:41.998 [2024-07-16 00:21:33.538167] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.998 [2024-07-16 00:21:33.538430] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.998 00:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:42.255 malloc0 00:17:42.255 00:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:42.820 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.ANJewCPW44 00:17:42.820 [2024-07-16 00:21:34.431620] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ANJewCPW44 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ANJewCPW44' 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2307004 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2307004 /var/tmp/bdevperf.sock 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2307004 ']' 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.079 00:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.079 [2024-07-16 00:21:34.495961] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:43.080 [2024-07-16 00:21:34.496045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307004 ] 00:17:43.080 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.080 [2024-07-16 00:21:34.553086] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.080 [2024-07-16 00:21:34.657537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.337 00:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.337 00:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:43.337 00:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ANJewCPW44 00:17:43.595 [2024-07-16 00:21:35.043678] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.595 [2024-07-16 00:21:35.043798] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:43.595 TLSTESTn1 00:17:43.595 00:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:43.852 Running I/O for 10 seconds... 00:17:53.894 00:17:53.895 Latency(us) 00:17:53.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.895 Verification LBA range: start 0x0 length 0x2000 00:17:53.895 TLSTESTn1 : 10.07 1882.43 7.35 0.00 0.00 67798.74 5898.24 100973.99 00:17:53.895 =================================================================================================================== 00:17:53.895 Total : 1882.43 7.35 0.00 0.00 67798.74 5898.24 100973.99 00:17:53.895 0 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2307004 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2307004 ']' 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2307004 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2307004 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2307004' 00:17:53.895 killing process with pid 2307004 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2307004 00:17:53.895 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.895 00:17:53.895 Latency(us) 00:17:53.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:53.895 =================================================================================================================== 00:17:53.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.895 [2024-07-16 00:21:45.370538] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.895 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2307004 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ANJewCPW44 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ANJewCPW44 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ANJewCPW44 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ANJewCPW44 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ANJewCPW44' 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2308320 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2308320 /var/tmp/bdevperf.sock 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2308320 ']' 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.154 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.154 [2024-07-16 00:21:45.690031] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:17:54.154 [2024-07-16 00:21:45.690124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308320 ] 00:17:54.154 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.154 [2024-07-16 00:21:45.752554] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.414 [2024-07-16 00:21:45.861226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.414 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.414 00:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:54.414 00:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ANJewCPW44 00:17:54.672 [2024-07-16 00:21:46.251933] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.672 [2024-07-16 00:21:46.252006] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:54.672 [2024-07-16 00:21:46.252020] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ANJewCPW44 00:17:54.672 request: 00:17:54.672 { 00:17:54.672 "name": "TLSTEST", 00:17:54.672 "trtype": "tcp", 00:17:54.672 "traddr": "10.0.0.2", 00:17:54.672 "adrfam": "ipv4", 00:17:54.672 "trsvcid": "4420", 00:17:54.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.672 "prchk_reftag": false, 00:17:54.672 "prchk_guard": false, 00:17:54.672 "hdgst": false, 00:17:54.672 "ddgst": false, 00:17:54.672 "psk": "/tmp/tmp.ANJewCPW44", 00:17:54.672 "method": "bdev_nvme_attach_controller", 00:17:54.672 "req_id": 1 00:17:54.672 } 00:17:54.672 Got JSON-RPC error response 00:17:54.672 response: 00:17:54.672 { 00:17:54.672 "code": -1, 00:17:54.672 "message": "Operation not permitted" 00:17:54.672 } 00:17:54.672 00:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2308320 00:17:54.672 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2308320 ']' 00:17:54.672 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2308320 00:17:54.672 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:54.672 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.672 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2308320 00:17:54.932 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:54.932 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:54.932 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2308320' 00:17:54.932 killing process with pid 2308320 00:17:54.932 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2308320 00:17:54.932 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.932 00:17:54.932 Latency(us) 00:17:54.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.932 
=================================================================================================================== 00:17:54.932 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.932 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2308320 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2306704 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2306704 ']' 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2306704 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2306704 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2306704' 00:17:55.192 killing process with pid 2306704 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2306704 00:17:55.192 [2024-07-16 00:21:46.591734] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:55.192 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2306704 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2308465 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2308465 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2308465 ']' 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.451 00:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.451 [2024-07-16 00:21:46.944474] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:17:55.451 [2024-07-16 00:21:46.944568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.451 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.451 [2024-07-16 00:21:47.014686] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.709 [2024-07-16 00:21:47.128676] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.709 [2024-07-16 00:21:47.128735] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.709 [2024-07-16 00:21:47.128748] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.709 [2024-07-16 00:21:47.128759] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.709 [2024-07-16 00:21:47.128769] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.709 [2024-07-16 00:21:47.128797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ANJewCPW44 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ANJewCPW44 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.709 00:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ANJewCPW44 00:17:55.710 00:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ANJewCPW44 00:17:55.710 00:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:55.968 [2024-07-16 00:21:47.551400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.968 00:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:56.536 
00:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:56.536 [2024-07-16 00:21:48.100844] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.536 [2024-07-16 00:21:48.101113] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.536 00:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:56.794 malloc0 00:17:56.794 00:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ANJewCPW44 00:17:57.363 [2024-07-16 00:21:48.919239] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:57.363 [2024-07-16 00:21:48.919281] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:57.363 [2024-07-16 00:21:48.919333] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:57.363 request: 00:17:57.363 { 00:17:57.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.363 "host": "nqn.2016-06.io.spdk:host1", 00:17:57.363 "psk": "/tmp/tmp.ANJewCPW44", 00:17:57.363 "method": "nvmf_subsystem_add_host", 00:17:57.363 "req_id": 1 00:17:57.363 } 00:17:57.363 Got JSON-RPC error response 00:17:57.363 response: 00:17:57.363 { 00:17:57.363 "code": -32603, 00:17:57.363 "message": "Internal error" 00:17:57.363 } 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2308465 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2308465 ']' 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2308465 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2308465 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2308465' 00:17:57.363 killing process with pid 2308465 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2308465 00:17:57.363 00:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2308465 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ANJewCPW44 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:57.930 
00:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2308766 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2308766 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2308766 ']' 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.930 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.931 [2024-07-16 00:21:49.288983] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:17:57.931 [2024-07-16 00:21:49.289075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.931 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.931 [2024-07-16 00:21:49.352470] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.931 [2024-07-16 00:21:49.459632] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.931 [2024-07-16 00:21:49.459709] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.931 [2024-07-16 00:21:49.459722] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.931 [2024-07-16 00:21:49.459733] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.931 [2024-07-16 00:21:49.459750] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:57.931 [2024-07-16 00:21:49.459784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ANJewCPW44 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ANJewCPW44 00:17:58.191 00:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:58.450 [2024-07-16 00:21:49.843328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.450 00:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:58.708 00:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:58.966 [2024-07-16 00:21:50.396829] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:58.966 [2024-07-16 00:21:50.397107] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.966 00:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:59.224 malloc0 00:17:59.224 00:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:59.481 00:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ANJewCPW44 00:17:59.740 [2024-07-16 00:21:51.125973] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2309040 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2309040 /var/tmp/bdevperf.sock 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2309040 ']' 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.740 00:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.740 [2024-07-16 00:21:51.186827] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:17:59.740 [2024-07-16 00:21:51.186916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309040 ] 00:17:59.740 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.740 [2024-07-16 00:21:51.243328] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.740 [2024-07-16 00:21:51.351526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.998 00:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.998 00:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:59.998 00:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ANJewCPW44 00:18:00.257 [2024-07-16 00:21:51.686302] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.257 [2024-07-16 00:21:51.686433] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:00.257 TLSTESTn1 00:18:00.257 00:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:00.516 00:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:00.516 "subsystems": [ 00:18:00.516 { 00:18:00.516 "subsystem": "keyring", 00:18:00.516 "config": [] 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "subsystem": "iobuf", 00:18:00.516 "config": [ 00:18:00.516 { 00:18:00.516 "method": "iobuf_set_options", 00:18:00.516 "params": { 00:18:00.516 "small_pool_count": 8192, 00:18:00.516 "large_pool_count": 1024, 00:18:00.516 "small_bufsize": 8192, 00:18:00.516 "large_bufsize": 135168 00:18:00.516 } 00:18:00.516 } 00:18:00.516 ] 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "subsystem": "sock", 00:18:00.516 "config": [ 00:18:00.516 { 00:18:00.516 "method": "sock_set_default_impl", 00:18:00.516 "params": { 00:18:00.516 "impl_name": "posix" 00:18:00.516 } 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "method": "sock_impl_set_options", 00:18:00.516 "params": { 00:18:00.516 "impl_name": "ssl", 00:18:00.516 "recv_buf_size": 4096, 00:18:00.516 "send_buf_size": 4096, 00:18:00.516 "enable_recv_pipe": true, 00:18:00.516 "enable_quickack": false, 00:18:00.516 "enable_placement_id": 0, 00:18:00.516 "enable_zerocopy_send_server": true, 00:18:00.516 "enable_zerocopy_send_client": false, 00:18:00.516 "zerocopy_threshold": 0, 00:18:00.516 "tls_version": 0, 00:18:00.516 "enable_ktls": false 00:18:00.516 } 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "method": "sock_impl_set_options", 00:18:00.516 "params": { 00:18:00.516 "impl_name": "posix", 00:18:00.516 "recv_buf_size": 2097152, 00:18:00.516 
"send_buf_size": 2097152, 00:18:00.516 "enable_recv_pipe": true, 00:18:00.516 "enable_quickack": false, 00:18:00.516 "enable_placement_id": 0, 00:18:00.516 "enable_zerocopy_send_server": true, 00:18:00.516 "enable_zerocopy_send_client": false, 00:18:00.516 "zerocopy_threshold": 0, 00:18:00.516 "tls_version": 0, 00:18:00.516 "enable_ktls": false 00:18:00.516 } 00:18:00.516 } 00:18:00.516 ] 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "subsystem": "vmd", 00:18:00.516 "config": [] 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "subsystem": "accel", 00:18:00.516 "config": [ 00:18:00.516 { 00:18:00.516 "method": "accel_set_options", 00:18:00.516 "params": { 00:18:00.516 "small_cache_size": 128, 00:18:00.516 "large_cache_size": 16, 00:18:00.516 "task_count": 2048, 00:18:00.516 "sequence_count": 2048, 00:18:00.516 "buf_count": 2048 00:18:00.516 } 00:18:00.516 } 00:18:00.516 ] 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "subsystem": "bdev", 00:18:00.516 "config": [ 00:18:00.516 { 00:18:00.516 "method": "bdev_set_options", 00:18:00.516 "params": { 00:18:00.516 "bdev_io_pool_size": 65535, 00:18:00.516 "bdev_io_cache_size": 256, 00:18:00.516 "bdev_auto_examine": true, 00:18:00.516 "iobuf_small_cache_size": 128, 00:18:00.516 "iobuf_large_cache_size": 16 00:18:00.516 } 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "method": "bdev_raid_set_options", 00:18:00.516 "params": { 00:18:00.516 "process_window_size_kb": 1024 00:18:00.516 } 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "method": "bdev_iscsi_set_options", 00:18:00.516 "params": { 00:18:00.516 "timeout_sec": 30 00:18:00.516 } 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "method": "bdev_nvme_set_options", 00:18:00.516 "params": { 00:18:00.516 "action_on_timeout": "none", 00:18:00.516 "timeout_us": 0, 00:18:00.516 "timeout_admin_us": 0, 00:18:00.516 "keep_alive_timeout_ms": 10000, 00:18:00.516 "arbitration_burst": 0, 00:18:00.516 "low_priority_weight": 0, 00:18:00.516 "medium_priority_weight": 0, 00:18:00.516 "high_priority_weight": 0, 00:18:00.516 "nvme_adminq_poll_period_us": 10000, 00:18:00.516 "nvme_ioq_poll_period_us": 0, 00:18:00.516 "io_queue_requests": 0, 00:18:00.516 "delay_cmd_submit": true, 00:18:00.516 "transport_retry_count": 4, 00:18:00.516 "bdev_retry_count": 3, 00:18:00.516 "transport_ack_timeout": 0, 00:18:00.516 "ctrlr_loss_timeout_sec": 0, 00:18:00.516 "reconnect_delay_sec": 0, 00:18:00.516 "fast_io_fail_timeout_sec": 0, 00:18:00.516 "disable_auto_failback": false, 00:18:00.516 "generate_uuids": false, 00:18:00.516 "transport_tos": 0, 00:18:00.516 "nvme_error_stat": false, 00:18:00.516 "rdma_srq_size": 0, 00:18:00.516 "io_path_stat": false, 00:18:00.516 "allow_accel_sequence": false, 00:18:00.516 "rdma_max_cq_size": 0, 00:18:00.516 "rdma_cm_event_timeout_ms": 0, 00:18:00.516 "dhchap_digests": [ 00:18:00.516 "sha256", 00:18:00.516 "sha384", 00:18:00.516 "sha512" 00:18:00.516 ], 00:18:00.516 "dhchap_dhgroups": [ 00:18:00.516 "null", 00:18:00.516 "ffdhe2048", 00:18:00.516 "ffdhe3072", 00:18:00.516 "ffdhe4096", 00:18:00.516 "ffdhe6144", 00:18:00.516 "ffdhe8192" 00:18:00.516 ] 00:18:00.516 } 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "method": "bdev_nvme_set_hotplug", 00:18:00.516 "params": { 00:18:00.516 "period_us": 100000, 00:18:00.516 "enable": false 00:18:00.516 } 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "method": "bdev_malloc_create", 00:18:00.516 "params": { 00:18:00.516 "name": "malloc0", 00:18:00.516 "num_blocks": 8192, 00:18:00.516 "block_size": 4096, 00:18:00.516 "physical_block_size": 4096, 00:18:00.516 "uuid": 
"4cef33f1-022e-46f5-90f4-b0eebc458411", 00:18:00.516 "optimal_io_boundary": 0 00:18:00.516 } 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "method": "bdev_wait_for_examine" 00:18:00.516 } 00:18:00.516 ] 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "subsystem": "nbd", 00:18:00.516 "config": [] 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "subsystem": "scheduler", 00:18:00.516 "config": [ 00:18:00.516 { 00:18:00.516 "method": "framework_set_scheduler", 00:18:00.516 "params": { 00:18:00.516 "name": "static" 00:18:00.516 } 00:18:00.516 } 00:18:00.516 ] 00:18:00.516 }, 00:18:00.516 { 00:18:00.516 "subsystem": "nvmf", 00:18:00.516 "config": [ 00:18:00.516 { 00:18:00.516 "method": "nvmf_set_config", 00:18:00.516 "params": { 00:18:00.516 "discovery_filter": "match_any", 00:18:00.516 "admin_cmd_passthru": { 00:18:00.516 "identify_ctrlr": false 00:18:00.516 } 00:18:00.516 } 00:18:00.516 }, 00:18:00.517 { 00:18:00.517 "method": "nvmf_set_max_subsystems", 00:18:00.517 "params": { 00:18:00.517 "max_subsystems": 1024 00:18:00.517 } 00:18:00.517 }, 00:18:00.517 { 00:18:00.517 "method": "nvmf_set_crdt", 00:18:00.517 "params": { 00:18:00.517 "crdt1": 0, 00:18:00.517 "crdt2": 0, 00:18:00.517 "crdt3": 0 00:18:00.517 } 00:18:00.517 }, 00:18:00.517 { 00:18:00.517 "method": "nvmf_create_transport", 00:18:00.517 "params": { 00:18:00.517 "trtype": "TCP", 00:18:00.517 "max_queue_depth": 128, 00:18:00.517 "max_io_qpairs_per_ctrlr": 127, 00:18:00.517 "in_capsule_data_size": 4096, 00:18:00.517 "max_io_size": 131072, 00:18:00.517 "io_unit_size": 131072, 00:18:00.517 "max_aq_depth": 128, 00:18:00.517 "num_shared_buffers": 511, 00:18:00.517 "buf_cache_size": 4294967295, 00:18:00.517 "dif_insert_or_strip": false, 00:18:00.517 "zcopy": false, 00:18:00.517 "c2h_success": false, 00:18:00.517 "sock_priority": 0, 00:18:00.517 "abort_timeout_sec": 1, 00:18:00.517 "ack_timeout": 0, 00:18:00.517 "data_wr_pool_size": 0 00:18:00.517 } 00:18:00.517 }, 00:18:00.517 { 00:18:00.517 "method": "nvmf_create_subsystem", 00:18:00.517 "params": { 00:18:00.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.517 "allow_any_host": false, 00:18:00.517 "serial_number": "SPDK00000000000001", 00:18:00.517 "model_number": "SPDK bdev Controller", 00:18:00.517 "max_namespaces": 10, 00:18:00.517 "min_cntlid": 1, 00:18:00.517 "max_cntlid": 65519, 00:18:00.517 "ana_reporting": false 00:18:00.517 } 00:18:00.517 }, 00:18:00.517 { 00:18:00.517 "method": "nvmf_subsystem_add_host", 00:18:00.517 "params": { 00:18:00.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.517 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.517 "psk": "/tmp/tmp.ANJewCPW44" 00:18:00.517 } 00:18:00.517 }, 00:18:00.517 { 00:18:00.517 "method": "nvmf_subsystem_add_ns", 00:18:00.517 "params": { 00:18:00.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.517 "namespace": { 00:18:00.517 "nsid": 1, 00:18:00.517 "bdev_name": "malloc0", 00:18:00.517 "nguid": "4CEF33F1022E46F590F4B0EEBC458411", 00:18:00.517 "uuid": "4cef33f1-022e-46f5-90f4-b0eebc458411", 00:18:00.517 "no_auto_visible": false 00:18:00.517 } 00:18:00.517 } 00:18:00.517 }, 00:18:00.517 { 00:18:00.517 "method": "nvmf_subsystem_add_listener", 00:18:00.517 "params": { 00:18:00.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.517 "listen_address": { 00:18:00.517 "trtype": "TCP", 00:18:00.517 "adrfam": "IPv4", 00:18:00.517 "traddr": "10.0.0.2", 00:18:00.517 "trsvcid": "4420" 00:18:00.517 }, 00:18:00.517 "secure_channel": true 00:18:00.517 } 00:18:00.517 } 00:18:00.517 ] 00:18:00.517 } 00:18:00.517 ] 00:18:00.517 }' 00:18:00.517 00:21:52 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:01.088 00:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:01.088 "subsystems": [ 00:18:01.088 { 00:18:01.088 "subsystem": "keyring", 00:18:01.088 "config": [] 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "subsystem": "iobuf", 00:18:01.088 "config": [ 00:18:01.088 { 00:18:01.088 "method": "iobuf_set_options", 00:18:01.088 "params": { 00:18:01.088 "small_pool_count": 8192, 00:18:01.088 "large_pool_count": 1024, 00:18:01.088 "small_bufsize": 8192, 00:18:01.088 "large_bufsize": 135168 00:18:01.088 } 00:18:01.088 } 00:18:01.088 ] 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "subsystem": "sock", 00:18:01.088 "config": [ 00:18:01.088 { 00:18:01.088 "method": "sock_set_default_impl", 00:18:01.088 "params": { 00:18:01.088 "impl_name": "posix" 00:18:01.088 } 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "method": "sock_impl_set_options", 00:18:01.088 "params": { 00:18:01.088 "impl_name": "ssl", 00:18:01.088 "recv_buf_size": 4096, 00:18:01.088 "send_buf_size": 4096, 00:18:01.088 "enable_recv_pipe": true, 00:18:01.088 "enable_quickack": false, 00:18:01.088 "enable_placement_id": 0, 00:18:01.088 "enable_zerocopy_send_server": true, 00:18:01.088 "enable_zerocopy_send_client": false, 00:18:01.088 "zerocopy_threshold": 0, 00:18:01.088 "tls_version": 0, 00:18:01.088 "enable_ktls": false 00:18:01.088 } 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "method": "sock_impl_set_options", 00:18:01.088 "params": { 00:18:01.088 "impl_name": "posix", 00:18:01.088 "recv_buf_size": 2097152, 00:18:01.088 "send_buf_size": 2097152, 00:18:01.088 "enable_recv_pipe": true, 00:18:01.088 "enable_quickack": false, 00:18:01.088 "enable_placement_id": 0, 00:18:01.088 "enable_zerocopy_send_server": true, 00:18:01.088 "enable_zerocopy_send_client": false, 00:18:01.088 "zerocopy_threshold": 0, 00:18:01.088 "tls_version": 0, 00:18:01.088 "enable_ktls": false 00:18:01.088 } 00:18:01.088 } 00:18:01.088 ] 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "subsystem": "vmd", 00:18:01.088 "config": [] 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "subsystem": "accel", 00:18:01.088 "config": [ 00:18:01.088 { 00:18:01.088 "method": "accel_set_options", 00:18:01.088 "params": { 00:18:01.088 "small_cache_size": 128, 00:18:01.088 "large_cache_size": 16, 00:18:01.088 "task_count": 2048, 00:18:01.088 "sequence_count": 2048, 00:18:01.088 "buf_count": 2048 00:18:01.088 } 00:18:01.088 } 00:18:01.088 ] 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "subsystem": "bdev", 00:18:01.088 "config": [ 00:18:01.088 { 00:18:01.088 "method": "bdev_set_options", 00:18:01.088 "params": { 00:18:01.088 "bdev_io_pool_size": 65535, 00:18:01.088 "bdev_io_cache_size": 256, 00:18:01.088 "bdev_auto_examine": true, 00:18:01.088 "iobuf_small_cache_size": 128, 00:18:01.088 "iobuf_large_cache_size": 16 00:18:01.088 } 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "method": "bdev_raid_set_options", 00:18:01.088 "params": { 00:18:01.088 "process_window_size_kb": 1024 00:18:01.088 } 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "method": "bdev_iscsi_set_options", 00:18:01.088 "params": { 00:18:01.088 "timeout_sec": 30 00:18:01.088 } 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "method": "bdev_nvme_set_options", 00:18:01.088 "params": { 00:18:01.088 "action_on_timeout": "none", 00:18:01.088 "timeout_us": 0, 00:18:01.088 "timeout_admin_us": 0, 00:18:01.088 "keep_alive_timeout_ms": 10000, 00:18:01.088 "arbitration_burst": 0, 
00:18:01.088 "low_priority_weight": 0, 00:18:01.088 "medium_priority_weight": 0, 00:18:01.088 "high_priority_weight": 0, 00:18:01.088 "nvme_adminq_poll_period_us": 10000, 00:18:01.088 "nvme_ioq_poll_period_us": 0, 00:18:01.088 "io_queue_requests": 512, 00:18:01.088 "delay_cmd_submit": true, 00:18:01.088 "transport_retry_count": 4, 00:18:01.088 "bdev_retry_count": 3, 00:18:01.088 "transport_ack_timeout": 0, 00:18:01.088 "ctrlr_loss_timeout_sec": 0, 00:18:01.088 "reconnect_delay_sec": 0, 00:18:01.088 "fast_io_fail_timeout_sec": 0, 00:18:01.088 "disable_auto_failback": false, 00:18:01.088 "generate_uuids": false, 00:18:01.088 "transport_tos": 0, 00:18:01.088 "nvme_error_stat": false, 00:18:01.088 "rdma_srq_size": 0, 00:18:01.088 "io_path_stat": false, 00:18:01.088 "allow_accel_sequence": false, 00:18:01.088 "rdma_max_cq_size": 0, 00:18:01.088 "rdma_cm_event_timeout_ms": 0, 00:18:01.088 "dhchap_digests": [ 00:18:01.088 "sha256", 00:18:01.088 "sha384", 00:18:01.088 "sha512" 00:18:01.088 ], 00:18:01.088 "dhchap_dhgroups": [ 00:18:01.088 "null", 00:18:01.088 "ffdhe2048", 00:18:01.088 "ffdhe3072", 00:18:01.088 "ffdhe4096", 00:18:01.088 "ffdhe6144", 00:18:01.088 "ffdhe8192" 00:18:01.088 ] 00:18:01.088 } 00:18:01.088 }, 00:18:01.088 { 00:18:01.088 "method": "bdev_nvme_attach_controller", 00:18:01.088 "params": { 00:18:01.088 "name": "TLSTEST", 00:18:01.088 "trtype": "TCP", 00:18:01.088 "adrfam": "IPv4", 00:18:01.088 "traddr": "10.0.0.2", 00:18:01.088 "trsvcid": "4420", 00:18:01.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.089 "prchk_reftag": false, 00:18:01.089 "prchk_guard": false, 00:18:01.089 "ctrlr_loss_timeout_sec": 0, 00:18:01.089 "reconnect_delay_sec": 0, 00:18:01.089 "fast_io_fail_timeout_sec": 0, 00:18:01.089 "psk": "/tmp/tmp.ANJewCPW44", 00:18:01.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.089 "hdgst": false, 00:18:01.089 "ddgst": false 00:18:01.089 } 00:18:01.089 }, 00:18:01.089 { 00:18:01.089 "method": "bdev_nvme_set_hotplug", 00:18:01.089 "params": { 00:18:01.089 "period_us": 100000, 00:18:01.089 "enable": false 00:18:01.089 } 00:18:01.089 }, 00:18:01.089 { 00:18:01.089 "method": "bdev_wait_for_examine" 00:18:01.089 } 00:18:01.089 ] 00:18:01.089 }, 00:18:01.089 { 00:18:01.089 "subsystem": "nbd", 00:18:01.089 "config": [] 00:18:01.089 } 00:18:01.089 ] 00:18:01.089 }' 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2309040 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2309040 ']' 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2309040 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2309040 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2309040' 00:18:01.089 killing process with pid 2309040 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2309040 00:18:01.089 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.089 00:18:01.089 Latency(us) 00:18:01.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:01.089 =================================================================================================================== 00:18:01.089 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.089 [2024-07-16 00:21:52.454069] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:01.089 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2309040 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2308766 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2308766 ']' 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2308766 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2308766 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2308766' 00:18:01.349 killing process with pid 2308766 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2308766 00:18:01.349 [2024-07-16 00:21:52.745392] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:01.349 00:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2308766 00:18:01.610 00:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:01.610 00:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.610 00:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:01.610 "subsystems": [ 00:18:01.610 { 00:18:01.610 "subsystem": "keyring", 00:18:01.610 "config": [] 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "subsystem": "iobuf", 00:18:01.610 "config": [ 00:18:01.610 { 00:18:01.610 "method": "iobuf_set_options", 00:18:01.610 "params": { 00:18:01.610 "small_pool_count": 8192, 00:18:01.610 "large_pool_count": 1024, 00:18:01.610 "small_bufsize": 8192, 00:18:01.610 "large_bufsize": 135168 00:18:01.610 } 00:18:01.610 } 00:18:01.610 ] 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "subsystem": "sock", 00:18:01.610 "config": [ 00:18:01.610 { 00:18:01.610 "method": "sock_set_default_impl", 00:18:01.610 "params": { 00:18:01.610 "impl_name": "posix" 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "sock_impl_set_options", 00:18:01.610 "params": { 00:18:01.610 "impl_name": "ssl", 00:18:01.610 "recv_buf_size": 4096, 00:18:01.610 "send_buf_size": 4096, 00:18:01.610 "enable_recv_pipe": true, 00:18:01.610 "enable_quickack": false, 00:18:01.610 "enable_placement_id": 0, 00:18:01.610 "enable_zerocopy_send_server": true, 00:18:01.610 "enable_zerocopy_send_client": false, 00:18:01.610 "zerocopy_threshold": 0, 00:18:01.610 "tls_version": 0, 00:18:01.610 "enable_ktls": false 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "sock_impl_set_options", 00:18:01.610 "params": { 00:18:01.610 "impl_name": "posix", 00:18:01.610 "recv_buf_size": 2097152, 00:18:01.610 "send_buf_size": 2097152, 00:18:01.610 "enable_recv_pipe": true, 
00:18:01.610 "enable_quickack": false, 00:18:01.610 "enable_placement_id": 0, 00:18:01.610 "enable_zerocopy_send_server": true, 00:18:01.610 "enable_zerocopy_send_client": false, 00:18:01.610 "zerocopy_threshold": 0, 00:18:01.610 "tls_version": 0, 00:18:01.610 "enable_ktls": false 00:18:01.610 } 00:18:01.610 } 00:18:01.610 ] 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "subsystem": "vmd", 00:18:01.610 "config": [] 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "subsystem": "accel", 00:18:01.610 "config": [ 00:18:01.610 { 00:18:01.610 "method": "accel_set_options", 00:18:01.610 "params": { 00:18:01.610 "small_cache_size": 128, 00:18:01.610 "large_cache_size": 16, 00:18:01.610 "task_count": 2048, 00:18:01.610 "sequence_count": 2048, 00:18:01.610 "buf_count": 2048 00:18:01.610 } 00:18:01.610 } 00:18:01.610 ] 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "subsystem": "bdev", 00:18:01.610 "config": [ 00:18:01.610 { 00:18:01.610 "method": "bdev_set_options", 00:18:01.610 "params": { 00:18:01.610 "bdev_io_pool_size": 65535, 00:18:01.610 "bdev_io_cache_size": 256, 00:18:01.610 "bdev_auto_examine": true, 00:18:01.610 "iobuf_small_cache_size": 128, 00:18:01.610 "iobuf_large_cache_size": 16 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "bdev_raid_set_options", 00:18:01.610 "params": { 00:18:01.610 "process_window_size_kb": 1024 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "bdev_iscsi_set_options", 00:18:01.610 "params": { 00:18:01.610 "timeout_sec": 30 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "bdev_nvme_set_options", 00:18:01.610 "params": { 00:18:01.610 "action_on_timeout": "none", 00:18:01.610 "timeout_us": 0, 00:18:01.610 "timeout_admin_us": 0, 00:18:01.610 "keep_alive_timeout_ms": 10000, 00:18:01.610 "arbitration_burst": 0, 00:18:01.610 "low_priority_weight": 0, 00:18:01.610 "medium_priority_weight": 0, 00:18:01.610 "high_priority_weight": 0, 00:18:01.610 "nvme_adminq_poll_period_us": 10000, 00:18:01.610 "nvme_ioq_poll_period_us": 0, 00:18:01.610 "io_queue_requests": 0, 00:18:01.610 "delay_cmd_submit": true, 00:18:01.610 "transport_retry_count": 4, 00:18:01.610 "bdev_retry_count": 3, 00:18:01.610 "transport_ack_timeout": 0, 00:18:01.610 "ctrlr_loss_timeout_sec": 0, 00:18:01.610 "reconnect_delay_sec": 0, 00:18:01.610 "fast_io_fail_timeout_sec": 0, 00:18:01.610 "disable_auto_failback": false, 00:18:01.610 "generate_uuids": false, 00:18:01.610 "transport_tos": 0, 00:18:01.610 "nvme_error_stat": false, 00:18:01.610 "rdma_srq_size": 0, 00:18:01.610 "io_path_stat": false, 00:18:01.610 "allow_accel_sequence": false, 00:18:01.610 "rdma_max_cq_size": 0, 00:18:01.610 "rdma_cm_event_timeout_ms": 0, 00:18:01.610 "dhchap_digests": [ 00:18:01.610 "sha256", 00:18:01.610 "sha384", 00:18:01.610 "sha512" 00:18:01.610 ], 00:18:01.610 "dhchap_dhgroups": [ 00:18:01.610 "null", 00:18:01.610 "ffdhe2048", 00:18:01.610 "ffdhe3072", 00:18:01.610 "ffdhe4096", 00:18:01.610 "ffdhe 00:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.610 6144", 00:18:01.610 "ffdhe8192" 00:18:01.610 ] 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "bdev_nvme_set_hotplug", 00:18:01.610 "params": { 00:18:01.610 "period_us": 100000, 00:18:01.610 "enable": false 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "bdev_malloc_create", 00:18:01.610 "params": { 00:18:01.610 "name": "malloc0", 00:18:01.610 "num_blocks": 8192, 00:18:01.610 "block_size": 4096, 00:18:01.610 "physical_block_size": 4096, 
00:18:01.610 "uuid": "4cef33f1-022e-46f5-90f4-b0eebc458411", 00:18:01.610 "optimal_io_boundary": 0 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "bdev_wait_for_examine" 00:18:01.610 } 00:18:01.610 ] 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "subsystem": "nbd", 00:18:01.610 "config": [] 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "subsystem": "scheduler", 00:18:01.610 "config": [ 00:18:01.610 { 00:18:01.610 "method": "framework_set_scheduler", 00:18:01.610 "params": { 00:18:01.610 "name": "static" 00:18:01.610 } 00:18:01.610 } 00:18:01.610 ] 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "subsystem": "nvmf", 00:18:01.610 "config": [ 00:18:01.610 { 00:18:01.610 "method": "nvmf_set_config", 00:18:01.610 "params": { 00:18:01.610 "discovery_filter": "match_any", 00:18:01.610 "admin_cmd_passthru": { 00:18:01.610 "identify_ctrlr": false 00:18:01.610 } 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "nvmf_set_max_subsystems", 00:18:01.610 "params": { 00:18:01.610 "max_subsystems": 1024 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "nvmf_set_crdt", 00:18:01.610 "params": { 00:18:01.610 "crdt1": 0, 00:18:01.610 "crdt2": 0, 00:18:01.610 "crdt3": 0 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "nvmf_create_transport", 00:18:01.610 "params": { 00:18:01.610 "trtype": "TCP", 00:18:01.610 "max_queue_depth": 128, 00:18:01.610 "max_io_qpairs_per_ctrlr": 127, 00:18:01.610 "in_capsule_data_size": 4096, 00:18:01.610 "max_io_size": 131072, 00:18:01.610 "io_unit_size": 131072, 00:18:01.610 "max_aq_depth": 128, 00:18:01.610 "num_shared_buffers": 511, 00:18:01.610 "buf_cache_size": 4294967295, 00:18:01.610 "dif_insert_or_strip": false, 00:18:01.610 "zcopy": false, 00:18:01.610 "c2h_success": false, 00:18:01.610 "sock_priority": 0, 00:18:01.610 "abort_timeout_sec": 1, 00:18:01.610 "ack_timeout": 0, 00:18:01.610 "data_wr_pool_size": 0 00:18:01.610 } 00:18:01.610 }, 00:18:01.610 { 00:18:01.610 "method": "nvmf_create_subsystem", 00:18:01.610 "params": { 00:18:01.610 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.610 "allow_any_host": false, 00:18:01.611 "serial_number": "SPDK00000000000001", 00:18:01.611 "model_number": "SPDK bdev Controller", 00:18:01.611 "max_namespaces": 10, 00:18:01.611 "min_cntlid": 1, 00:18:01.611 "max_cntlid": 65519, 00:18:01.611 "ana_reporting": false 00:18:01.611 } 00:18:01.611 }, 00:18:01.611 { 00:18:01.611 "method": "nvmf_subsystem_add_host", 00:18:01.611 "params": { 00:18:01.611 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.611 "host": "nqn.2016-06.io.spdk:host1", 00:18:01.611 "psk": "/tmp/tmp.ANJewCPW44" 00:18:01.611 } 00:18:01.611 }, 00:18:01.611 { 00:18:01.611 "method": "nvmf_subsystem_add_ns", 00:18:01.611 "params": { 00:18:01.611 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.611 "namespace": { 00:18:01.611 "nsid": 1, 00:18:01.611 "bdev_name": "malloc0", 00:18:01.611 "nguid": "4CEF33F1022E46F590F4B0EEBC458411", 00:18:01.611 "uuid": "4cef33f1-022e-46f5-90f4-b0eebc458411", 00:18:01.611 "no_auto_visible": false 00:18:01.611 } 00:18:01.611 } 00:18:01.611 }, 00:18:01.611 { 00:18:01.611 "method": "nvmf_subsystem_add_listener", 00:18:01.611 "params": { 00:18:01.611 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.611 "listen_address": { 00:18:01.611 "trtype": "TCP", 00:18:01.611 "adrfam": "IPv4", 00:18:01.611 "traddr": "10.0.0.2", 00:18:01.611 "trsvcid": "4420" 00:18:01.611 }, 00:18:01.611 "secure_channel": true 00:18:01.611 } 00:18:01.611 } 00:18:01.611 ] 00:18:01.611 } 00:18:01.611 ] 00:18:01.611 }' 
00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2309318 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2309318 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2309318 ']' 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.611 00:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.611 [2024-07-16 00:21:53.103243] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:18:01.611 [2024-07-16 00:21:53.103327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.611 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.611 [2024-07-16 00:21:53.174089] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.870 [2024-07-16 00:21:53.283323] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.870 [2024-07-16 00:21:53.283387] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.870 [2024-07-16 00:21:53.283401] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.870 [2024-07-16 00:21:53.283412] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.870 [2024-07-16 00:21:53.283421] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.870 [2024-07-16 00:21:53.283507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.128 [2024-07-16 00:21:53.523743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.128 [2024-07-16 00:21:53.539683] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:02.128 [2024-07-16 00:21:53.555754] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.128 [2024-07-16 00:21:53.563101] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2309364 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2309364 /var/tmp/bdevperf.sock 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2309364 ']' 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
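Next comes the initiator side: bdevperf was just launched with -z, which parks it until perform_tests arrives on the RPC socket named by -r, and with -c /dev/fd/63, over which the echo below feeds it the bdevperf configuration saved earlier (note its bdev_nvme_attach_controller still carries the PSK as a raw path, /tmp/tmp.ANJewCPW44). A sketch of the launch-and-trigger pair, reusing the flags from this log; pairing $bdevperfconf with <(echo ...) is an assumption about how tls.sh builds fd 63:

    # Start the initiator idle (-z) with its config on a process-substitution fd,
    # then kick off the actual I/O from a second shell via the RPC socket.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests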
00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.693 00:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:02.693 "subsystems": [ 00:18:02.693 { 00:18:02.693 "subsystem": "keyring", 00:18:02.693 "config": [] 00:18:02.693 }, 00:18:02.693 { 00:18:02.693 "subsystem": "iobuf", 00:18:02.693 "config": [ 00:18:02.693 { 00:18:02.693 "method": "iobuf_set_options", 00:18:02.693 "params": { 00:18:02.693 "small_pool_count": 8192, 00:18:02.693 "large_pool_count": 1024, 00:18:02.693 "small_bufsize": 8192, 00:18:02.693 "large_bufsize": 135168 00:18:02.693 } 00:18:02.693 } 00:18:02.693 ] 00:18:02.693 }, 00:18:02.693 { 00:18:02.693 "subsystem": "sock", 00:18:02.693 "config": [ 00:18:02.694 { 00:18:02.694 "method": "sock_set_default_impl", 00:18:02.694 "params": { 00:18:02.694 "impl_name": "posix" 00:18:02.694 } 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "method": "sock_impl_set_options", 00:18:02.694 "params": { 00:18:02.694 "impl_name": "ssl", 00:18:02.694 "recv_buf_size": 4096, 00:18:02.694 "send_buf_size": 4096, 00:18:02.694 "enable_recv_pipe": true, 00:18:02.694 "enable_quickack": false, 00:18:02.694 "enable_placement_id": 0, 00:18:02.694 "enable_zerocopy_send_server": true, 00:18:02.694 "enable_zerocopy_send_client": false, 00:18:02.694 "zerocopy_threshold": 0, 00:18:02.694 "tls_version": 0, 00:18:02.694 "enable_ktls": false 00:18:02.694 } 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "method": "sock_impl_set_options", 00:18:02.694 "params": { 00:18:02.694 "impl_name": "posix", 00:18:02.694 "recv_buf_size": 2097152, 00:18:02.694 "send_buf_size": 2097152, 00:18:02.694 "enable_recv_pipe": true, 00:18:02.694 "enable_quickack": false, 00:18:02.694 "enable_placement_id": 0, 00:18:02.694 "enable_zerocopy_send_server": true, 00:18:02.694 "enable_zerocopy_send_client": false, 00:18:02.694 "zerocopy_threshold": 0, 00:18:02.694 "tls_version": 0, 00:18:02.694 "enable_ktls": false 00:18:02.694 } 00:18:02.694 } 00:18:02.694 ] 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "subsystem": "vmd", 00:18:02.694 "config": [] 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "subsystem": "accel", 00:18:02.694 "config": [ 00:18:02.694 { 00:18:02.694 "method": "accel_set_options", 00:18:02.694 "params": { 00:18:02.694 "small_cache_size": 128, 00:18:02.694 "large_cache_size": 16, 00:18:02.694 "task_count": 2048, 00:18:02.694 "sequence_count": 2048, 00:18:02.694 "buf_count": 2048 00:18:02.694 } 00:18:02.694 } 00:18:02.694 ] 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "subsystem": "bdev", 00:18:02.694 "config": [ 00:18:02.694 { 00:18:02.694 "method": "bdev_set_options", 00:18:02.694 "params": { 00:18:02.694 "bdev_io_pool_size": 65535, 00:18:02.694 "bdev_io_cache_size": 256, 00:18:02.694 "bdev_auto_examine": true, 00:18:02.694 "iobuf_small_cache_size": 128, 00:18:02.694 "iobuf_large_cache_size": 16 00:18:02.694 } 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "method": "bdev_raid_set_options", 00:18:02.694 "params": { 00:18:02.694 "process_window_size_kb": 1024 00:18:02.694 } 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "method": "bdev_iscsi_set_options", 00:18:02.694 "params": { 00:18:02.694 "timeout_sec": 30 00:18:02.694 } 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "method": "bdev_nvme_set_options", 00:18:02.694 "params": { 00:18:02.694 "action_on_timeout": "none", 00:18:02.694 "timeout_us": 0, 00:18:02.694 "timeout_admin_us": 0, 00:18:02.694 "keep_alive_timeout_ms": 10000, 00:18:02.694 "arbitration_burst": 0, 00:18:02.694 "low_priority_weight": 0, 00:18:02.694 
"medium_priority_weight": 0, 00:18:02.694 "high_priority_weight": 0, 00:18:02.694 "nvme_adminq_poll_period_us": 10000, 00:18:02.694 "nvme_ioq_poll_period_us": 0, 00:18:02.694 "io_queue_requests": 512, 00:18:02.694 "delay_cmd_submit": true, 00:18:02.694 "transport_retry_count": 4, 00:18:02.694 "bdev_retry_count": 3, 00:18:02.694 "transport_ack_timeout": 0, 00:18:02.694 "ctrlr_loss_timeout_sec": 0, 00:18:02.694 "reconnect_delay_sec": 0, 00:18:02.694 "fast_io_fail_timeout_sec": 0, 00:18:02.694 "disable_auto_failback": false, 00:18:02.694 "generate_uuids": false, 00:18:02.694 "transport_tos": 0, 00:18:02.694 "nvme_error_stat": false, 00:18:02.694 "rdma_srq_size": 0, 00:18:02.694 "io_path_stat": false, 00:18:02.694 "allow_accel_sequence": false, 00:18:02.694 "rdma_max_cq_size": 0, 00:18:02.694 "rdma_cm_event_timeout_ms": 0, 00:18:02.694 "dhchap_digests": [ 00:18:02.694 "sha256", 00:18:02.694 "sha384", 00:18:02.694 "sha512" 00:18:02.694 ], 00:18:02.694 "dhchap_dhgroups": [ 00:18:02.694 "null", 00:18:02.694 "ffdhe2048", 00:18:02.694 "ffdhe3072", 00:18:02.694 "ffdhe4096", 00:18:02.694 "ffdhe6144", 00:18:02.694 "ffdhe8192" 00:18:02.694 ] 00:18:02.694 } 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "method": "bdev_nvme_attach_controller", 00:18:02.694 "params": { 00:18:02.694 "name": "TLSTEST", 00:18:02.694 "trtype": "TCP", 00:18:02.694 "adrfam": "IPv4", 00:18:02.694 "traddr": "10.0.0.2", 00:18:02.694 "trsvcid": "4420", 00:18:02.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.694 "prchk_reftag": false, 00:18:02.694 "prchk_guard": false, 00:18:02.694 "ctrlr_loss_timeout_sec": 0, 00:18:02.694 "reconnect_delay_sec": 0, 00:18:02.694 "fast_io_fail_timeout_sec": 0, 00:18:02.694 "psk": "/tmp/tmp.ANJewCPW44", 00:18:02.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.694 "hdgst": false, 00:18:02.694 "ddgst": false 00:18:02.694 } 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "method": "bdev_nvme_set_hotplug", 00:18:02.694 "params": { 00:18:02.694 "period_us": 100000, 00:18:02.694 "enable": false 00:18:02.694 } 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "method": "bdev_wait_for_examine" 00:18:02.694 } 00:18:02.694 ] 00:18:02.694 }, 00:18:02.694 { 00:18:02.694 "subsystem": "nbd", 00:18:02.694 "config": [] 00:18:02.694 } 00:18:02.694 ] 00:18:02.694 }' 00:18:02.694 00:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.694 [2024-07-16 00:21:54.112280] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:18:02.694 [2024-07-16 00:21:54.112390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309364 ] 00:18:02.694 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.694 [2024-07-16 00:21:54.173838] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.694 [2024-07-16 00:21:54.279893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.954 [2024-07-16 00:21:54.443213] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.954 [2024-07-16 00:21:54.443353] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:03.520 00:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.520 00:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:03.520 00:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:03.779 Running I/O for 10 seconds... 00:18:13.788 00:18:13.788 Latency(us) 00:18:13.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.788 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.788 Verification LBA range: start 0x0 length 0x2000 00:18:13.788 TLSTESTn1 : 10.08 1209.00 4.72 0.00 0.00 105509.89 10485.76 146800.64 00:18:13.788 =================================================================================================================== 00:18:13.788 Total : 1209.00 4.72 0.00 0.00 105509.89 10485.76 146800.64 00:18:13.788 0 00:18:13.788 00:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.788 00:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2309364 00:18:13.788 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2309364 ']' 00:18:13.788 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2309364 00:18:13.788 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:13.788 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.789 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2309364 00:18:13.789 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:13.789 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:13.789 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2309364' 00:18:13.789 killing process with pid 2309364 00:18:13.789 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2309364 00:18:13.789 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.789 00:18:13.789 Latency(us) 00:18:13.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.789 =================================================================================================================== 00:18:13.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.789 [2024-07-16 00:22:05.317038] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:13.789 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2309364 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2309318 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2309318 ']' 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2309318 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2309318 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2309318' 00:18:14.048 killing process with pid 2309318 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2309318 00:18:14.048 [2024-07-16 00:22:05.601363] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:14.048 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2309318 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2310804 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2310804 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2310804 ']' 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.306 00:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.564 [2024-07-16 00:22:05.949315] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:18:14.564 [2024-07-16 00:22:05.949391] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.564 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.564 [2024-07-16 00:22:06.010604] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.564 [2024-07-16 00:22:06.116800] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.564 [2024-07-16 00:22:06.116852] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.564 [2024-07-16 00:22:06.116865] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.564 [2024-07-16 00:22:06.116882] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.564 [2024-07-16 00:22:06.116909] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.564 [2024-07-16 00:22:06.116935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.821 00:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.821 00:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:14.821 00:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.822 00:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:14.822 00:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.822 00:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.822 00:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ANJewCPW44 00:18:14.822 00:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ANJewCPW44 00:18:14.822 00:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.079 [2024-07-16 00:22:06.519022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.079 00:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.337 00:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:15.595 [2024-07-16 00:22:07.024363] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.595 [2024-07-16 00:22:07.024616] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.595 00:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.852 malloc0 00:18:15.852 00:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:16.110 00:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.ANJewCPW44 00:18:16.368 [2024-07-16 00:22:07.902581] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2311088 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2311088 /var/tmp/bdevperf.sock 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2311088 ']' 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.368 00:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.368 [2024-07-16 00:22:07.961440] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:18:16.368 [2024-07-16 00:22:07.961523] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311088 ] 00:18:16.626 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.626 [2024-07-16 00:22:08.020961] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.626 [2024-07-16 00:22:08.128611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.626 00:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.626 00:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:16.626 00:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ANJewCPW44 00:18:16.884 00:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:17.142 [2024-07-16 00:22:08.703903] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.401 nvme0n1 00:18:17.401 00:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.401 Running I/O for 1 seconds... 
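This second bdevperf run differs from the first in how the initiator gets its PSK: instead of passing the file path directly in bdev_nvme_attach_controller (the form that triggers the spdk_nvme_ctrlr_opts.psk deprecation warnings seen above), the key file is first registered with the keyring and the controller then references it by name. The two RPCs as issued in this run, with the workspace prefix trimmed:

    # Register the PSK file under the name key0, then attach referencing the
    # keyring entry via --psk key0 rather than a raw /tmp path.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ANJewCPW44
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1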
00:18:18.775 00:18:18.775 Latency(us) 00:18:18.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.775 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:18.775 Verification LBA range: start 0x0 length 0x2000 00:18:18.775 nvme0n1 : 1.07 1645.66 6.43 0.00 0.00 75816.34 10631.40 109517.94 00:18:18.775 =================================================================================================================== 00:18:18.775 Total : 1645.66 6.43 0.00 0.00 75816.34 10631.40 109517.94 00:18:18.775 0 00:18:18.775 00:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2311088 00:18:18.775 00:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2311088 ']' 00:18:18.775 00:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2311088 00:18:18.775 00:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.775 00:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.775 00:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2311088 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2311088' 00:18:18.775 killing process with pid 2311088 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2311088 00:18:18.775 Received shutdown signal, test time was about 1.000000 seconds 00:18:18.775 00:18:18.775 Latency(us) 00:18:18.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.775 =================================================================================================================== 00:18:18.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2311088 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2310804 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2310804 ']' 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2310804 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2310804 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2310804' 00:18:18.775 killing process with pid 2310804 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2310804 00:18:18.775 [2024-07-16 00:22:10.323650] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:18.775 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2310804 00:18:19.034 00:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:19.034 00:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.034 
00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.034 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.034 00:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2311370 00:18:19.034 00:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:19.035 00:22:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2311370 00:18:19.035 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2311370 ']' 00:18:19.035 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.035 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.035 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.035 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.035 00:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.295 [2024-07-16 00:22:10.681593] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:18:19.295 [2024-07-16 00:22:10.681697] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.295 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.295 [2024-07-16 00:22:10.749711] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.295 [2024-07-16 00:22:10.861707] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.295 [2024-07-16 00:22:10.861771] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.295 [2024-07-16 00:22:10.861794] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.295 [2024-07-16 00:22:10.861807] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.295 [2024-07-16 00:22:10.861819] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:19.295 [2024-07-16 00:22:10.861850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.230 [2024-07-16 00:22:11.692824] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.230 malloc0 00:18:20.230 [2024-07-16 00:22:11.724936] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.230 [2024-07-16 00:22:11.725202] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2311523 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:20.230 00:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2311523 /var/tmp/bdevperf.sock 00:18:20.231 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2311523 ']' 00:18:20.231 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.231 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.231 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.231 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.231 00:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.231 [2024-07-16 00:22:11.797119] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:18:20.231 [2024-07-16 00:22:11.797198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311523 ] 00:18:20.231 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.487 [2024-07-16 00:22:11.858303] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.487 [2024-07-16 00:22:11.974399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.487 00:22:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.487 00:22:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:20.487 00:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ANJewCPW44 00:18:21.055 00:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:21.055 [2024-07-16 00:22:12.588524] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.055 nvme0n1 00:18:21.314 00:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:21.314 Running I/O for 1 seconds... 00:18:22.248 00:18:22.248 Latency(us) 00:18:22.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.248 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:22.248 Verification LBA range: start 0x0 length 0x2000 00:18:22.248 nvme0n1 : 1.06 1666.37 6.51 0.00 0.00 74974.79 5873.97 102527.43 00:18:22.248 =================================================================================================================== 00:18:22.248 Total : 1666.37 6.51 0.00 0.00 74974.79 5873.97 102527.43 00:18:22.248 0 00:18:22.505 00:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:22.505 00:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.505 00:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.505 00:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.505 00:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:22.505 "subsystems": [ 00:18:22.505 { 00:18:22.505 "subsystem": "keyring", 00:18:22.505 "config": [ 00:18:22.505 { 00:18:22.505 "method": "keyring_file_add_key", 00:18:22.505 "params": { 00:18:22.505 "name": "key0", 00:18:22.505 "path": "/tmp/tmp.ANJewCPW44" 00:18:22.505 } 00:18:22.505 } 00:18:22.505 ] 00:18:22.505 }, 00:18:22.505 { 00:18:22.505 "subsystem": "iobuf", 00:18:22.505 "config": [ 00:18:22.505 { 00:18:22.505 "method": "iobuf_set_options", 00:18:22.505 "params": { 00:18:22.505 "small_pool_count": 8192, 00:18:22.505 "large_pool_count": 1024, 00:18:22.505 "small_bufsize": 8192, 00:18:22.505 "large_bufsize": 135168 00:18:22.505 } 00:18:22.505 } 00:18:22.505 ] 00:18:22.505 }, 00:18:22.505 { 00:18:22.505 "subsystem": "sock", 00:18:22.505 "config": [ 00:18:22.505 { 00:18:22.505 "method": "sock_set_default_impl", 00:18:22.505 "params": { 00:18:22.505 "impl_name": "posix" 00:18:22.505 } 
00:18:22.505 }, 00:18:22.505 { 00:18:22.505 "method": "sock_impl_set_options", 00:18:22.505 "params": { 00:18:22.505 "impl_name": "ssl", 00:18:22.505 "recv_buf_size": 4096, 00:18:22.505 "send_buf_size": 4096, 00:18:22.505 "enable_recv_pipe": true, 00:18:22.505 "enable_quickack": false, 00:18:22.505 "enable_placement_id": 0, 00:18:22.505 "enable_zerocopy_send_server": true, 00:18:22.505 "enable_zerocopy_send_client": false, 00:18:22.505 "zerocopy_threshold": 0, 00:18:22.505 "tls_version": 0, 00:18:22.505 "enable_ktls": false 00:18:22.505 } 00:18:22.505 }, 00:18:22.505 { 00:18:22.505 "method": "sock_impl_set_options", 00:18:22.505 "params": { 00:18:22.505 "impl_name": "posix", 00:18:22.505 "recv_buf_size": 2097152, 00:18:22.505 "send_buf_size": 2097152, 00:18:22.505 "enable_recv_pipe": true, 00:18:22.505 "enable_quickack": false, 00:18:22.505 "enable_placement_id": 0, 00:18:22.505 "enable_zerocopy_send_server": true, 00:18:22.505 "enable_zerocopy_send_client": false, 00:18:22.505 "zerocopy_threshold": 0, 00:18:22.505 "tls_version": 0, 00:18:22.505 "enable_ktls": false 00:18:22.505 } 00:18:22.505 } 00:18:22.505 ] 00:18:22.505 }, 00:18:22.505 { 00:18:22.505 "subsystem": "vmd", 00:18:22.505 "config": [] 00:18:22.505 }, 00:18:22.505 { 00:18:22.505 "subsystem": "accel", 00:18:22.505 "config": [ 00:18:22.505 { 00:18:22.505 "method": "accel_set_options", 00:18:22.505 "params": { 00:18:22.505 "small_cache_size": 128, 00:18:22.505 "large_cache_size": 16, 00:18:22.505 "task_count": 2048, 00:18:22.505 "sequence_count": 2048, 00:18:22.505 "buf_count": 2048 00:18:22.505 } 00:18:22.505 } 00:18:22.505 ] 00:18:22.505 }, 00:18:22.505 { 00:18:22.505 "subsystem": "bdev", 00:18:22.505 "config": [ 00:18:22.505 { 00:18:22.505 "method": "bdev_set_options", 00:18:22.505 "params": { 00:18:22.505 "bdev_io_pool_size": 65535, 00:18:22.505 "bdev_io_cache_size": 256, 00:18:22.505 "bdev_auto_examine": true, 00:18:22.505 "iobuf_small_cache_size": 128, 00:18:22.505 "iobuf_large_cache_size": 16 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "bdev_raid_set_options", 00:18:22.506 "params": { 00:18:22.506 "process_window_size_kb": 1024 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "bdev_iscsi_set_options", 00:18:22.506 "params": { 00:18:22.506 "timeout_sec": 30 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "bdev_nvme_set_options", 00:18:22.506 "params": { 00:18:22.506 "action_on_timeout": "none", 00:18:22.506 "timeout_us": 0, 00:18:22.506 "timeout_admin_us": 0, 00:18:22.506 "keep_alive_timeout_ms": 10000, 00:18:22.506 "arbitration_burst": 0, 00:18:22.506 "low_priority_weight": 0, 00:18:22.506 "medium_priority_weight": 0, 00:18:22.506 "high_priority_weight": 0, 00:18:22.506 "nvme_adminq_poll_period_us": 10000, 00:18:22.506 "nvme_ioq_poll_period_us": 0, 00:18:22.506 "io_queue_requests": 0, 00:18:22.506 "delay_cmd_submit": true, 00:18:22.506 "transport_retry_count": 4, 00:18:22.506 "bdev_retry_count": 3, 00:18:22.506 "transport_ack_timeout": 0, 00:18:22.506 "ctrlr_loss_timeout_sec": 0, 00:18:22.506 "reconnect_delay_sec": 0, 00:18:22.506 "fast_io_fail_timeout_sec": 0, 00:18:22.506 "disable_auto_failback": false, 00:18:22.506 "generate_uuids": false, 00:18:22.506 "transport_tos": 0, 00:18:22.506 "nvme_error_stat": false, 00:18:22.506 "rdma_srq_size": 0, 00:18:22.506 "io_path_stat": false, 00:18:22.506 "allow_accel_sequence": false, 00:18:22.506 "rdma_max_cq_size": 0, 00:18:22.506 "rdma_cm_event_timeout_ms": 0, 00:18:22.506 "dhchap_digests": [ 00:18:22.506 "sha256", 
00:18:22.506 "sha384", 00:18:22.506 "sha512" 00:18:22.506 ], 00:18:22.506 "dhchap_dhgroups": [ 00:18:22.506 "null", 00:18:22.506 "ffdhe2048", 00:18:22.506 "ffdhe3072", 00:18:22.506 "ffdhe4096", 00:18:22.506 "ffdhe6144", 00:18:22.506 "ffdhe8192" 00:18:22.506 ] 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "bdev_nvme_set_hotplug", 00:18:22.506 "params": { 00:18:22.506 "period_us": 100000, 00:18:22.506 "enable": false 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "bdev_malloc_create", 00:18:22.506 "params": { 00:18:22.506 "name": "malloc0", 00:18:22.506 "num_blocks": 8192, 00:18:22.506 "block_size": 4096, 00:18:22.506 "physical_block_size": 4096, 00:18:22.506 "uuid": "65407bc7-886d-435e-8079-c95194922e24", 00:18:22.506 "optimal_io_boundary": 0 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "bdev_wait_for_examine" 00:18:22.506 } 00:18:22.506 ] 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "subsystem": "nbd", 00:18:22.506 "config": [] 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "subsystem": "scheduler", 00:18:22.506 "config": [ 00:18:22.506 { 00:18:22.506 "method": "framework_set_scheduler", 00:18:22.506 "params": { 00:18:22.506 "name": "static" 00:18:22.506 } 00:18:22.506 } 00:18:22.506 ] 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "subsystem": "nvmf", 00:18:22.506 "config": [ 00:18:22.506 { 00:18:22.506 "method": "nvmf_set_config", 00:18:22.506 "params": { 00:18:22.506 "discovery_filter": "match_any", 00:18:22.506 "admin_cmd_passthru": { 00:18:22.506 "identify_ctrlr": false 00:18:22.506 } 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "nvmf_set_max_subsystems", 00:18:22.506 "params": { 00:18:22.506 "max_subsystems": 1024 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "nvmf_set_crdt", 00:18:22.506 "params": { 00:18:22.506 "crdt1": 0, 00:18:22.506 "crdt2": 0, 00:18:22.506 "crdt3": 0 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "nvmf_create_transport", 00:18:22.506 "params": { 00:18:22.506 "trtype": "TCP", 00:18:22.506 "max_queue_depth": 128, 00:18:22.506 "max_io_qpairs_per_ctrlr": 127, 00:18:22.506 "in_capsule_data_size": 4096, 00:18:22.506 "max_io_size": 131072, 00:18:22.506 "io_unit_size": 131072, 00:18:22.506 "max_aq_depth": 128, 00:18:22.506 "num_shared_buffers": 511, 00:18:22.506 "buf_cache_size": 4294967295, 00:18:22.506 "dif_insert_or_strip": false, 00:18:22.506 "zcopy": false, 00:18:22.506 "c2h_success": false, 00:18:22.506 "sock_priority": 0, 00:18:22.506 "abort_timeout_sec": 1, 00:18:22.506 "ack_timeout": 0, 00:18:22.506 "data_wr_pool_size": 0 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "nvmf_create_subsystem", 00:18:22.506 "params": { 00:18:22.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.506 "allow_any_host": false, 00:18:22.506 "serial_number": "00000000000000000000", 00:18:22.506 "model_number": "SPDK bdev Controller", 00:18:22.506 "max_namespaces": 32, 00:18:22.506 "min_cntlid": 1, 00:18:22.506 "max_cntlid": 65519, 00:18:22.506 "ana_reporting": false 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "nvmf_subsystem_add_host", 00:18:22.506 "params": { 00:18:22.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.506 "host": "nqn.2016-06.io.spdk:host1", 00:18:22.506 "psk": "key0" 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "nvmf_subsystem_add_ns", 00:18:22.506 "params": { 00:18:22.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.506 "namespace": { 00:18:22.506 "nsid": 1, 
00:18:22.506 "bdev_name": "malloc0", 00:18:22.506 "nguid": "65407BC7886D435E8079C95194922E24", 00:18:22.506 "uuid": "65407bc7-886d-435e-8079-c95194922e24", 00:18:22.506 "no_auto_visible": false 00:18:22.506 } 00:18:22.506 } 00:18:22.506 }, 00:18:22.506 { 00:18:22.506 "method": "nvmf_subsystem_add_listener", 00:18:22.506 "params": { 00:18:22.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.506 "listen_address": { 00:18:22.506 "trtype": "TCP", 00:18:22.506 "adrfam": "IPv4", 00:18:22.506 "traddr": "10.0.0.2", 00:18:22.506 "trsvcid": "4420" 00:18:22.506 }, 00:18:22.506 "secure_channel": false, 00:18:22.506 "sock_impl": "ssl" 00:18:22.506 } 00:18:22.506 } 00:18:22.506 ] 00:18:22.506 } 00:18:22.506 ] 00:18:22.506 }' 00:18:22.506 00:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:22.763 00:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:22.763 "subsystems": [ 00:18:22.763 { 00:18:22.763 "subsystem": "keyring", 00:18:22.763 "config": [ 00:18:22.764 { 00:18:22.764 "method": "keyring_file_add_key", 00:18:22.764 "params": { 00:18:22.764 "name": "key0", 00:18:22.764 "path": "/tmp/tmp.ANJewCPW44" 00:18:22.764 } 00:18:22.764 } 00:18:22.764 ] 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "subsystem": "iobuf", 00:18:22.764 "config": [ 00:18:22.764 { 00:18:22.764 "method": "iobuf_set_options", 00:18:22.764 "params": { 00:18:22.764 "small_pool_count": 8192, 00:18:22.764 "large_pool_count": 1024, 00:18:22.764 "small_bufsize": 8192, 00:18:22.764 "large_bufsize": 135168 00:18:22.764 } 00:18:22.764 } 00:18:22.764 ] 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "subsystem": "sock", 00:18:22.764 "config": [ 00:18:22.764 { 00:18:22.764 "method": "sock_set_default_impl", 00:18:22.764 "params": { 00:18:22.764 "impl_name": "posix" 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "sock_impl_set_options", 00:18:22.764 "params": { 00:18:22.764 "impl_name": "ssl", 00:18:22.764 "recv_buf_size": 4096, 00:18:22.764 "send_buf_size": 4096, 00:18:22.764 "enable_recv_pipe": true, 00:18:22.764 "enable_quickack": false, 00:18:22.764 "enable_placement_id": 0, 00:18:22.764 "enable_zerocopy_send_server": true, 00:18:22.764 "enable_zerocopy_send_client": false, 00:18:22.764 "zerocopy_threshold": 0, 00:18:22.764 "tls_version": 0, 00:18:22.764 "enable_ktls": false 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "sock_impl_set_options", 00:18:22.764 "params": { 00:18:22.764 "impl_name": "posix", 00:18:22.764 "recv_buf_size": 2097152, 00:18:22.764 "send_buf_size": 2097152, 00:18:22.764 "enable_recv_pipe": true, 00:18:22.764 "enable_quickack": false, 00:18:22.764 "enable_placement_id": 0, 00:18:22.764 "enable_zerocopy_send_server": true, 00:18:22.764 "enable_zerocopy_send_client": false, 00:18:22.764 "zerocopy_threshold": 0, 00:18:22.764 "tls_version": 0, 00:18:22.764 "enable_ktls": false 00:18:22.764 } 00:18:22.764 } 00:18:22.764 ] 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "subsystem": "vmd", 00:18:22.764 "config": [] 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "subsystem": "accel", 00:18:22.764 "config": [ 00:18:22.764 { 00:18:22.764 "method": "accel_set_options", 00:18:22.764 "params": { 00:18:22.764 "small_cache_size": 128, 00:18:22.764 "large_cache_size": 16, 00:18:22.764 "task_count": 2048, 00:18:22.764 "sequence_count": 2048, 00:18:22.764 "buf_count": 2048 00:18:22.764 } 00:18:22.764 } 00:18:22.764 ] 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "subsystem": "bdev", 
00:18:22.764 "config": [ 00:18:22.764 { 00:18:22.764 "method": "bdev_set_options", 00:18:22.764 "params": { 00:18:22.764 "bdev_io_pool_size": 65535, 00:18:22.764 "bdev_io_cache_size": 256, 00:18:22.764 "bdev_auto_examine": true, 00:18:22.764 "iobuf_small_cache_size": 128, 00:18:22.764 "iobuf_large_cache_size": 16 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "bdev_raid_set_options", 00:18:22.764 "params": { 00:18:22.764 "process_window_size_kb": 1024 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "bdev_iscsi_set_options", 00:18:22.764 "params": { 00:18:22.764 "timeout_sec": 30 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "bdev_nvme_set_options", 00:18:22.764 "params": { 00:18:22.764 "action_on_timeout": "none", 00:18:22.764 "timeout_us": 0, 00:18:22.764 "timeout_admin_us": 0, 00:18:22.764 "keep_alive_timeout_ms": 10000, 00:18:22.764 "arbitration_burst": 0, 00:18:22.764 "low_priority_weight": 0, 00:18:22.764 "medium_priority_weight": 0, 00:18:22.764 "high_priority_weight": 0, 00:18:22.764 "nvme_adminq_poll_period_us": 10000, 00:18:22.764 "nvme_ioq_poll_period_us": 0, 00:18:22.764 "io_queue_requests": 512, 00:18:22.764 "delay_cmd_submit": true, 00:18:22.764 "transport_retry_count": 4, 00:18:22.764 "bdev_retry_count": 3, 00:18:22.764 "transport_ack_timeout": 0, 00:18:22.764 "ctrlr_loss_timeout_sec": 0, 00:18:22.764 "reconnect_delay_sec": 0, 00:18:22.764 "fast_io_fail_timeout_sec": 0, 00:18:22.764 "disable_auto_failback": false, 00:18:22.764 "generate_uuids": false, 00:18:22.764 "transport_tos": 0, 00:18:22.764 "nvme_error_stat": false, 00:18:22.764 "rdma_srq_size": 0, 00:18:22.764 "io_path_stat": false, 00:18:22.764 "allow_accel_sequence": false, 00:18:22.764 "rdma_max_cq_size": 0, 00:18:22.764 "rdma_cm_event_timeout_ms": 0, 00:18:22.764 "dhchap_digests": [ 00:18:22.764 "sha256", 00:18:22.764 "sha384", 00:18:22.764 "sha512" 00:18:22.764 ], 00:18:22.764 "dhchap_dhgroups": [ 00:18:22.764 "null", 00:18:22.764 "ffdhe2048", 00:18:22.764 "ffdhe3072", 00:18:22.764 "ffdhe4096", 00:18:22.764 "ffdhe6144", 00:18:22.764 "ffdhe8192" 00:18:22.764 ] 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "bdev_nvme_attach_controller", 00:18:22.764 "params": { 00:18:22.764 "name": "nvme0", 00:18:22.764 "trtype": "TCP", 00:18:22.764 "adrfam": "IPv4", 00:18:22.764 "traddr": "10.0.0.2", 00:18:22.764 "trsvcid": "4420", 00:18:22.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.764 "prchk_reftag": false, 00:18:22.764 "prchk_guard": false, 00:18:22.764 "ctrlr_loss_timeout_sec": 0, 00:18:22.764 "reconnect_delay_sec": 0, 00:18:22.764 "fast_io_fail_timeout_sec": 0, 00:18:22.764 "psk": "key0", 00:18:22.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.764 "hdgst": false, 00:18:22.764 "ddgst": false 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "bdev_nvme_set_hotplug", 00:18:22.764 "params": { 00:18:22.764 "period_us": 100000, 00:18:22.764 "enable": false 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "bdev_enable_histogram", 00:18:22.764 "params": { 00:18:22.764 "name": "nvme0n1", 00:18:22.764 "enable": true 00:18:22.764 } 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "method": "bdev_wait_for_examine" 00:18:22.764 } 00:18:22.764 ] 00:18:22.764 }, 00:18:22.764 { 00:18:22.764 "subsystem": "nbd", 00:18:22.764 "config": [] 00:18:22.764 } 00:18:22.764 ] 00:18:22.764 }' 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2311523 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 2311523 ']' 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2311523 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2311523 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2311523' 00:18:22.764 killing process with pid 2311523 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2311523 00:18:22.764 Received shutdown signal, test time was about 1.000000 seconds 00:18:22.764 00:18:22.764 Latency(us) 00:18:22.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.764 =================================================================================================================== 00:18:22.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.764 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2311523 00:18:23.022 00:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2311370 00:18:23.022 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2311370 ']' 00:18:23.022 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2311370 00:18:23.022 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:23.278 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.278 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2311370 00:18:23.278 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:23.278 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:23.278 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2311370' 00:18:23.278 killing process with pid 2311370 00:18:23.278 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2311370 00:18:23.278 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2311370 00:18:23.536 00:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:23.536 00:22:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.536 00:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:23.536 "subsystems": [ 00:18:23.536 { 00:18:23.536 "subsystem": "keyring", 00:18:23.536 "config": [ 00:18:23.536 { 00:18:23.536 "method": "keyring_file_add_key", 00:18:23.536 "params": { 00:18:23.536 "name": "key0", 00:18:23.536 "path": "/tmp/tmp.ANJewCPW44" 00:18:23.536 } 00:18:23.536 } 00:18:23.536 ] 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "subsystem": "iobuf", 00:18:23.536 "config": [ 00:18:23.536 { 00:18:23.536 "method": "iobuf_set_options", 00:18:23.536 "params": { 00:18:23.536 "small_pool_count": 8192, 00:18:23.536 "large_pool_count": 1024, 00:18:23.536 "small_bufsize": 8192, 00:18:23.536 "large_bufsize": 135168 00:18:23.536 } 00:18:23.536 } 00:18:23.536 ] 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "subsystem": "sock", 00:18:23.536 "config": [ 00:18:23.536 { 
00:18:23.536 "method": "sock_set_default_impl", 00:18:23.536 "params": { 00:18:23.536 "impl_name": "posix" 00:18:23.536 } 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "method": "sock_impl_set_options", 00:18:23.536 "params": { 00:18:23.536 "impl_name": "ssl", 00:18:23.536 "recv_buf_size": 4096, 00:18:23.536 "send_buf_size": 4096, 00:18:23.536 "enable_recv_pipe": true, 00:18:23.536 "enable_quickack": false, 00:18:23.536 "enable_placement_id": 0, 00:18:23.536 "enable_zerocopy_send_server": true, 00:18:23.536 "enable_zerocopy_send_client": false, 00:18:23.536 "zerocopy_threshold": 0, 00:18:23.536 "tls_version": 0, 00:18:23.536 "enable_ktls": false 00:18:23.536 } 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "method": "sock_impl_set_options", 00:18:23.536 "params": { 00:18:23.536 "impl_name": "posix", 00:18:23.536 "recv_buf_size": 2097152, 00:18:23.536 "send_buf_size": 2097152, 00:18:23.536 "enable_recv_pipe": true, 00:18:23.536 "enable_quickack": false, 00:18:23.536 "enable_placement_id": 0, 00:18:23.536 "enable_zerocopy_send_server": true, 00:18:23.536 "enable_zerocopy_send_client": false, 00:18:23.536 "zerocopy_threshold": 0, 00:18:23.536 "tls_version": 0, 00:18:23.536 "enable_ktls": false 00:18:23.536 } 00:18:23.536 } 00:18:23.536 ] 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "subsystem": "vmd", 00:18:23.536 "config": [] 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "subsystem": "accel", 00:18:23.536 "config": [ 00:18:23.536 { 00:18:23.536 "method": "accel_set_options", 00:18:23.536 "params": { 00:18:23.536 "small_cache_size": 128, 00:18:23.536 "large_cache_size": 16, 00:18:23.536 "task_count": 2048, 00:18:23.536 "sequence_count": 2048, 00:18:23.536 "buf_count": 2048 00:18:23.536 } 00:18:23.536 } 00:18:23.536 ] 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "subsystem": "bdev", 00:18:23.536 "config": [ 00:18:23.536 { 00:18:23.536 "method": "bdev_set_options", 00:18:23.536 "params": { 00:18:23.536 "bdev_io_pool_size": 65535, 00:18:23.536 "bdev_io_cache_size": 256, 00:18:23.536 "bdev_auto_examine": true, 00:18:23.536 "iobuf_small_cache_size": 128, 00:18:23.536 "iobuf_large_cache_size": 16 00:18:23.536 } 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "method": "bdev_raid_set_options", 00:18:23.536 "params": { 00:18:23.536 "process_window_size_kb": 1024 00:18:23.536 } 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "method": "bdev_iscsi_set_options", 00:18:23.536 "params": { 00:18:23.536 "timeout_sec": 30 00:18:23.536 } 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "method": "bdev_nvme_set_options", 00:18:23.536 "params": { 00:18:23.536 "action_on_timeout": "none", 00:18:23.536 "timeout_us": 0, 00:18:23.536 "timeout_admin_us": 0, 00:18:23.536 "keep_alive_timeout_ms": 10000, 00:18:23.536 "arbitration_burst": 0, 00:18:23.536 "low_priority_weight": 0, 00:18:23.536 "medium_priority_weight": 0, 00:18:23.536 "high_priority_weight": 0, 00:18:23.536 "nvme_adminq_poll_period_us": 10000, 00:18:23.536 "nvme_ioq_poll_period_us": 0, 00:18:23.536 "io_queue_requests": 0, 00:18:23.536 "delay_cmd_submit": true, 00:18:23.536 "transport_retry_count": 4, 00:18:23.536 "bdev_retry_count": 3, 00:18:23.536 "transport_ack_timeout": 0, 00:18:23.536 "ctrlr_loss_timeout_sec": 0, 00:18:23.536 "reconnect_delay_sec": 0, 00:18:23.536 "fast_io_fail_timeout_sec": 0, 00:18:23.536 "disable_auto_failback": false, 00:18:23.536 "generate_uuids": false, 00:18:23.536 "transport_tos": 0, 00:18:23.536 "nvme_error_stat": false, 00:18:23.536 "rdma_srq_size": 0, 00:18:23.536 "io_path_stat": false, 00:18:23.536 "allow_accel_sequence": false, 00:18:23.536 
"rdma_max_cq_size": 0, 00:18:23.536 "rdma_cm_event_timeout_ms": 0, 00:18:23.536 "dhchap_digests": [ 00:18:23.536 "sha256", 00:18:23.536 "sha384", 00:18:23.536 "sha512" 00:18:23.536 ], 00:18:23.536 "dhchap_dhgroups": [ 00:18:23.536 "null", 00:18:23.536 "ffdhe2048", 00:18:23.536 "ffdhe3072", 00:18:23.536 "ffdhe4096", 00:18:23.536 "ffdhe6144", 00:18:23.536 "ffdhe8192" 00:18:23.536 ] 00:18:23.536 } 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "method": "bdev_nvme_set_hotplug", 00:18:23.536 "params": { 00:18:23.536 "period_us": 100000, 00:18:23.536 "enable": false 00:18:23.536 } 00:18:23.536 }, 00:18:23.536 { 00:18:23.536 "method": "bdev_malloc_create", 00:18:23.537 "params": { 00:18:23.537 "name": "malloc0", 00:18:23.537 "num_blocks": 8192, 00:18:23.537 "block_size": 4096, 00:18:23.537 "physical_block_size": 4096, 00:18:23.537 "uuid": "65407bc7-886d-435e-8079-c95194922e24", 00:18:23.537 "optimal_io_boundary": 0 00:18:23.537 } 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "method": "bdev_wait_for_examine" 00:18:23.537 } 00:18:23.537 ] 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "subsystem": "nbd", 00:18:23.537 "config": [] 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "subsystem": "scheduler", 00:18:23.537 "config": [ 00:18:23.537 { 00:18:23.537 "method": "framework_set_scheduler", 00:18:23.537 "params": { 00:18:23.537 "name": "static" 00:18:23.537 } 00:18:23.537 } 00:18:23.537 ] 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "subsystem": "nvmf", 00:18:23.537 "config": [ 00:18:23.537 { 00:18:23.537 "method": "nvmf_set_config", 00:18:23.537 "params": { 00:18:23.537 "discovery_filter": "match_any", 00:18:23.537 "admin_cmd_passthru": { 00:18:23.537 "identify_ctrlr": false 00:18:23.537 } 00:18:23.537 } 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "method": "nvmf_set_max_subsystems", 00:18:23.537 "params": { 00:18:23.537 "max_subsystems": 1024 00:18:23.537 } 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "method": "nvmf_set_crdt", 00:18:23.537 "params": { 00:18:23.537 "crdt1": 0, 00:18:23.537 "crdt2": 0, 00:18:23.537 "crdt3": 0 00:18:23.537 } 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "method": "nvmf_create_transport", 00:18:23.537 "params": { 00:18:23.537 "trtype": "TCP", 00:18:23.537 "max_queue_depth": 128, 00:18:23.537 "max_io_qpairs_per_ctrlr": 127, 00:18:23.537 "in_capsule_data_size": 4096, 00:18:23.537 "max_io_size": 131072, 00:18:23.537 "io_unit_size": 131072, 00:18:23.537 "max_aq_depth": 128, 00:18:23.537 "num_shared_buffers": 511, 00:18:23.537 "buf_cache_size": 4294967295, 00:18:23.537 "dif_insert_or_strip": false, 00:18:23.537 "zcopy": false, 00:18:23.537 "c2h_success": false, 00:18:23.537 "sock_priority": 0, 00:18:23.537 "abort_timeout_sec": 1, 00:18:23.537 "ack_timeout": 0, 00:18:23.537 "data_wr_pool_size": 0 00:18:23.537 } 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "method": "nvmf_create_subsystem", 00:18:23.537 "params": { 00:18:23.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.537 "allow_any_host": false, 00:18:23.537 "serial_number": "00000000000000000000", 00:18:23.537 "model_number": "SPDK bdev Controller", 00:18:23.537 "max_namespaces": 32, 00:18:23.537 "min_cntlid": 1, 00:18:23.537 "max_cntlid": 65519, 00:18:23.537 "ana_reporting": false 00:18:23.537 } 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "method": "nvmf_subsystem_add_host", 00:18:23.537 "params": { 00:18:23.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.537 "host": "nqn.2016-06.io.spdk:host1", 00:18:23.537 "psk": "key0" 00:18:23.537 } 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "method": "nvmf_subsystem_add_ns", 00:18:23.537 
"params": { 00:18:23.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.537 "namespace": { 00:18:23.537 "nsid": 1, 00:18:23.537 "bdev_name": "malloc0", 00:18:23.537 "nguid": "65407BC7886D435E8079C95194922E24", 00:18:23.537 "uuid": "65407bc7-886d-435e-8079-c95194922e24", 00:18:23.537 "no_auto_visible": false 00:18:23.537 } 00:18:23.537 } 00:18:23.537 }, 00:18:23.537 { 00:18:23.537 "method": "nvmf_subsystem_add_listener", 00:18:23.537 "params": { 00:18:23.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.537 "listen_address": { 00:18:23.537 "trtype": "TCP", 00:18:23.537 "adrfam": "IPv4", 00:18:23.537 "traddr": "10.0.0.2", 00:18:23.537 "trsvcid": "4420" 00:18:23.537 }, 00:18:23.537 "secure_channel": false, 00:18:23.537 "sock_impl": "ssl" 00:18:23.537 } 00:18:23.537 } 00:18:23.537 ] 00:18:23.537 } 00:18:23.537 ] 00:18:23.537 }' 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2311932 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2311932 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2311932 ']' 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.537 00:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.537 [2024-07-16 00:22:14.986406] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:18:23.537 [2024-07-16 00:22:14.986485] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.537 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.537 [2024-07-16 00:22:15.048255] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.795 [2024-07-16 00:22:15.153123] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.795 [2024-07-16 00:22:15.153193] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.795 [2024-07-16 00:22:15.153232] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.795 [2024-07-16 00:22:15.153244] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.795 [2024-07-16 00:22:15.153261] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:23.795 [2024-07-16 00:22:15.153364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.795 [2024-07-16 00:22:15.400527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.054 [2024-07-16 00:22:15.432551] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.054 [2024-07-16 00:22:15.442102] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2312080 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2312080 /var/tmp/bdevperf.sock 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2312080 ']' 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.622 00:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:24.622 "subsystems": [ 00:18:24.622 { 00:18:24.622 "subsystem": "keyring", 00:18:24.622 "config": [ 00:18:24.622 { 00:18:24.622 "method": "keyring_file_add_key", 00:18:24.622 "params": { 00:18:24.622 "name": "key0", 00:18:24.622 "path": "/tmp/tmp.ANJewCPW44" 00:18:24.622 } 00:18:24.622 } 00:18:24.622 ] 00:18:24.622 }, 00:18:24.622 { 00:18:24.622 "subsystem": "iobuf", 00:18:24.622 "config": [ 00:18:24.622 { 00:18:24.622 "method": "iobuf_set_options", 00:18:24.622 "params": { 00:18:24.622 "small_pool_count": 8192, 00:18:24.622 "large_pool_count": 1024, 00:18:24.622 "small_bufsize": 8192, 00:18:24.622 "large_bufsize": 135168 00:18:24.622 } 00:18:24.622 } 00:18:24.622 ] 00:18:24.622 }, 00:18:24.622 { 00:18:24.622 "subsystem": "sock", 00:18:24.622 "config": [ 00:18:24.622 { 00:18:24.622 "method": "sock_set_default_impl", 00:18:24.622 "params": { 00:18:24.622 "impl_name": "posix" 00:18:24.622 } 00:18:24.622 }, 00:18:24.622 { 00:18:24.622 "method": "sock_impl_set_options", 00:18:24.622 "params": { 00:18:24.622 "impl_name": "ssl", 00:18:24.622 "recv_buf_size": 4096, 00:18:24.622 "send_buf_size": 4096, 00:18:24.622 "enable_recv_pipe": true, 00:18:24.622 "enable_quickack": false, 00:18:24.622 "enable_placement_id": 0, 00:18:24.622 "enable_zerocopy_send_server": true, 00:18:24.622 "enable_zerocopy_send_client": false, 00:18:24.622 "zerocopy_threshold": 0, 00:18:24.622 "tls_version": 0, 00:18:24.622 "enable_ktls": false 00:18:24.622 } 00:18:24.622 }, 00:18:24.622 { 00:18:24.622 "method": "sock_impl_set_options", 00:18:24.622 "params": { 00:18:24.622 "impl_name": "posix", 00:18:24.622 "recv_buf_size": 2097152, 00:18:24.622 "send_buf_size": 2097152, 00:18:24.622 
"enable_recv_pipe": true, 00:18:24.622 "enable_quickack": false, 00:18:24.622 "enable_placement_id": 0, 00:18:24.622 "enable_zerocopy_send_server": true, 00:18:24.622 "enable_zerocopy_send_client": false, 00:18:24.622 "zerocopy_threshold": 0, 00:18:24.622 "tls_version": 0, 00:18:24.622 "enable_ktls": false 00:18:24.622 } 00:18:24.622 } 00:18:24.622 ] 00:18:24.622 }, 00:18:24.622 { 00:18:24.622 "subsystem": "vmd", 00:18:24.622 "config": [] 00:18:24.622 }, 00:18:24.622 { 00:18:24.622 "subsystem": "accel", 00:18:24.622 "config": [ 00:18:24.622 { 00:18:24.622 "method": "accel_set_options", 00:18:24.622 "params": { 00:18:24.622 "small_cache_size": 128, 00:18:24.622 "large_cache_size": 16, 00:18:24.622 "task_count": 2048, 00:18:24.622 "sequence_count": 2048, 00:18:24.622 "buf_count": 2048 00:18:24.622 } 00:18:24.622 } 00:18:24.622 ] 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "subsystem": "bdev", 00:18:24.623 "config": [ 00:18:24.623 { 00:18:24.623 "method": "bdev_set_options", 00:18:24.623 "params": { 00:18:24.623 "bdev_io_pool_size": 65535, 00:18:24.623 "bdev_io_cache_size": 256, 00:18:24.623 "bdev_auto_examine": true, 00:18:24.623 "iobuf_small_cache_size": 128, 00:18:24.623 "iobuf_large_cache_size": 16 00:18:24.623 } 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "method": "bdev_raid_set_options", 00:18:24.623 "params": { 00:18:24.623 "process_window_size_kb": 1024 00:18:24.623 } 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "method": "bdev_iscsi_set_options", 00:18:24.623 "params": { 00:18:24.623 "timeout_sec": 30 00:18:24.623 } 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "method": "bdev_nvme_set_options", 00:18:24.623 "params": { 00:18:24.623 "action_on_timeout": "none", 00:18:24.623 "timeout_us": 0, 00:18:24.623 "timeout_admin_us": 0, 00:18:24.623 "keep_alive_timeout_ms": 10000, 00:18:24.623 "arbitration_burst": 0, 00:18:24.623 "low_priority_weight": 0, 00:18:24.623 "medium_priority_weight": 0, 00:18:24.623 "high_priority_weight": 0, 00:18:24.623 "nvme_adminq_poll_period_us": 10000, 00:18:24.623 "nvme_ioq_poll_period_us": 0, 00:18:24.623 "io_queue_requests": 512, 00:18:24.623 "delay_cmd_submit": true, 00:18:24.623 "transport_retry_count": 4, 00:18:24.623 "bdev_retry_count": 3, 00:18:24.623 "transport_ack_timeout": 0, 00:18:24.623 "ctrlr_loss_timeout_sec": 0, 00:18:24.623 "reconnect_delay_sec": 0, 00:18:24.623 "fast_io_fail_timeout_sec": 0, 00:18:24.623 "disable_auto_failback": false, 00:18:24.623 "generate_uuids": false, 00:18:24.623 "transport_tos": 0, 00:18:24.623 "nvme_error_stat": false, 00:18:24.623 "rdma_srq_size": 0, 00:18:24.623 "io_path_stat": false, 00:18:24.623 "allow_accel_sequence": false, 00:18:24.623 "rdma_max_cq_size": 0, 00:18:24.623 "rdma_cm_event_timeout_ms": 0, 00:18:24.623 "dhchap_digests": [ 00:18:24.623 "sha256", 00:18:24.623 "sha384", 00:18:24.623 "sh 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:24.623 a512" 00:18:24.623 ], 00:18:24.623 "dhchap_dhgroups": [ 00:18:24.623 "null", 00:18:24.623 "ffdhe2048", 00:18:24.623 "ffdhe3072", 00:18:24.623 "ffdhe4096", 00:18:24.623 "ffdhe6144", 00:18:24.623 "ffdhe8192" 00:18:24.623 ] 00:18:24.623 } 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "method": "bdev_nvme_attach_controller", 00:18:24.623 "params": { 00:18:24.623 "name": "nvme0", 00:18:24.623 "trtype": "TCP", 00:18:24.623 "adrfam": "IPv4", 00:18:24.623 "traddr": "10.0.0.2", 00:18:24.623 "trsvcid": "4420", 00:18:24.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.623 "prchk_reftag": false, 00:18:24.623 "prchk_guard": false, 00:18:24.623 "ctrlr_loss_timeout_sec": 0, 00:18:24.623 "reconnect_delay_sec": 0, 00:18:24.623 "fast_io_fail_timeout_sec": 0, 00:18:24.623 "psk": "key0", 00:18:24.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.623 "hdgst": false, 00:18:24.623 "ddgst": false 00:18:24.623 } 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "method": "bdev_nvme_set_hotplug", 00:18:24.623 "params": { 00:18:24.623 "period_us": 100000, 00:18:24.623 "enable": false 00:18:24.623 } 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "method": "bdev_enable_histogram", 00:18:24.623 "params": { 00:18:24.623 "name": "nvme0n1", 00:18:24.623 "enable": true 00:18:24.623 } 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "method": "bdev_wait_for_examine" 00:18:24.623 } 00:18:24.623 ] 00:18:24.623 }, 00:18:24.623 { 00:18:24.623 "subsystem": "nbd", 00:18:24.623 "config": [] 00:18:24.623 } 00:18:24.623 ] 00:18:24.623 }' 00:18:24.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.623 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.623 00:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.623 [2024-07-16 00:22:16.012173] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:18:24.623 [2024-07-16 00:22:16.012261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312080 ] 00:18:24.623 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.623 [2024-07-16 00:22:16.072869] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.623 [2024-07-16 00:22:16.188613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.882 [2024-07-16 00:22:16.372469] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.451 00:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.451 00:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:25.451 00:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:25.451 00:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:25.709 00:22:17 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.709 00:22:17 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.968 Running I/O for 1 seconds... 
00:18:26.916 00:18:26.916 Latency(us) 00:18:26.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.916 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:26.916 Verification LBA range: start 0x0 length 0x2000 00:18:26.916 nvme0n1 : 1.06 1727.34 6.75 0.00 0.00 72273.38 6553.60 111071.38 00:18:26.916 =================================================================================================================== 00:18:26.916 Total : 1727.34 6.75 0.00 0.00 72273.38 6553.60 111071.38 00:18:26.916 0 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:26.916 nvmf_trace.0 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2312080 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2312080 ']' 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2312080 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.916 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2312080 00:18:26.917 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:26.917 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:26.917 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2312080' 00:18:26.917 killing process with pid 2312080 00:18:26.917 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2312080 00:18:26.917 Received shutdown signal, test time was about 1.000000 seconds 00:18:26.917 00:18:26.917 Latency(us) 00:18:26.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.917 =================================================================================================================== 00:18:26.917 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.917 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2312080 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:27.227 rmmod nvme_tcp 00:18:27.227 rmmod nvme_fabrics 00:18:27.227 rmmod nvme_keyring 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2311932 ']' 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2311932 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2311932 ']' 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2311932 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.227 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.486 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2311932 00:18:27.486 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:27.486 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:27.486 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2311932' 00:18:27.486 killing process with pid 2311932 00:18:27.486 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2311932 00:18:27.486 00:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2311932 00:18:27.747 00:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:27.747 00:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:27.747 00:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:27.747 00:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:27.747 00:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:27.747 00:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.747 00:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.747 00:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.653 00:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:29.653 00:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.sp60pSuG6w /tmp/tmp.6ckTY2OL7W /tmp/tmp.ANJewCPW44 00:18:29.653 00:18:29.653 real 1m22.062s 00:18:29.653 user 2m11.132s 00:18:29.653 sys 0m28.021s 00:18:29.653 00:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:29.653 00:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.653 ************************************ 00:18:29.653 END TEST nvmf_tls 00:18:29.653 ************************************ 00:18:29.653 00:22:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:29.653 00:22:21 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:29.653 00:22:21 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:29.653 00:22:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:29.653 00:22:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:29.653 ************************************ 00:18:29.653 START TEST nvmf_fips 00:18:29.653 ************************************ 00:18:29.653 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:29.912 * Looking for test storage... 00:18:29.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:29.912 
00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:29.912 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:29.913 Error setting digest 00:18:29.913 00024A10FD7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:29.913 00024A10FD7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:29.913 00:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.823 
00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:31.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:31.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:31.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:31.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:18:31.823 00:18:31.823 --- 10.0.0.2 ping statistics --- 00:18:31.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.823 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:18:31.823 00:18:31.823 --- 10.0.0.1 ping statistics --- 00:18:31.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.823 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2314324 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2314324 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2314324 ']' 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.823 00:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:32.086 [2024-07-16 00:22:23.483562] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:18:32.086 [2024-07-16 00:22:23.483655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.086 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.086 [2024-07-16 00:22:23.546665] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.086 [2024-07-16 00:22:23.655334] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.086 [2024-07-16 00:22:23.655405] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
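The nvmf_tcp_init sequence traced above is what lets a single host exercise both ends of the NVMe/TCP link: the two ports of the same E810 NIC are split across network namespaces, with cvl_0_0 (10.0.0.2, the target side) moved into cvl_0_0_ns_spdk and cvl_0_1 (10.0.0.1, the initiator side) left in the default namespace, then verified with the two pings shown. A minimal standalone sketch of the same plumbing, assuming the interface names from this run and root privileges:

#!/usr/bin/env bash
# Sketch of the netns split performed by nvmf_tcp_init (run as root).
set -e
NS=cvl_0_0_ns_spdk
TGT=cvl_0_0        # target-side port, moved into the namespace
INI=cvl_0_1        # initiator-side port, stays in the default namespace

ip -4 addr flush "$TGT"
ip -4 addr flush "$INI"
ip netns add "$NS"
ip link set "$TGT" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INI" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up
# let NVMe/TCP traffic (port 4420) in from the initiator-facing port
iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions, as the log does above
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1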
00:18:32.086 [2024-07-16 00:22:23.655442] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.086 [2024-07-16 00:22:23.655453] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.086 [2024-07-16 00:22:23.655462] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.086 [2024-07-16 00:22:23.655494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:33.021 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:33.280 [2024-07-16 00:22:24.748570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.280 [2024-07-16 00:22:24.764579] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.280 [2024-07-16 00:22:24.764784] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.280 [2024-07-16 00:22:24.796852] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:33.280 malloc0 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2314573 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2314573 /var/tmp/bdevperf.sock 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2314573 ']' 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.280 00:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:33.538 [2024-07-16 00:22:24.897511] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:18:33.538 [2024-07-16 00:22:24.897598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314573 ] 00:18:33.538 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.538 [2024-07-16 00:22:24.959625] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.538 [2024-07-16 00:22:25.066171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.469 00:22:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.469 00:22:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:34.469 00:22:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:34.469 [2024-07-16 00:22:26.042485] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.469 [2024-07-16 00:22:26.042605] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:34.729 TLSTESTn1 00:18:34.729 00:22:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:34.729 Running I/O for 10 seconds... 
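The ten-second run just launched is the actual FIPS payload: bdevperf connects to the target over TLS using the interchange-format PSK that fips.sh wrote to key.txt and registered on both sides (the target-side listener and PSK host entry were set up by setup_nvmf_tgt_conf above, with the tcp.c warnings flagging TLS as experimental and the PSK path as deprecated). A condensed sketch of the initiator-side sequence, using the commands from this run with the jenkins SPDK checkout abbreviated to $ROOT:

# Initiator side of the TLS test; $ROOT stands for the SPDK checkout.
ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
KEY=$ROOT/test/nvmf/fips/key.txt
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
chmod 0600 "$KEY"

# bdevperf idles (-z) on its own RPC socket until a controller is attached
$ROOT/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # stand-in for waitforlisten

$ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
$ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests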
00:18:44.708 00:18:44.708 Latency(us) 00:18:44.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.708 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:44.708 Verification LBA range: start 0x0 length 0x2000 00:18:44.708 TLSTESTn1 : 10.06 1844.79 7.21 0.00 0.00 69180.10 6019.60 100973.99 00:18:44.708 =================================================================================================================== 00:18:44.708 Total : 1844.79 7.21 0.00 0.00 69180.10 6019.60 100973.99 00:18:44.708 0 00:18:44.967 00:22:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:44.967 00:22:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:44.967 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:44.967 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:44.967 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:44.967 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:44.967 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:44.968 nvmf_trace.0 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2314573 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2314573 ']' 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2314573 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2314573 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2314573' 00:18:44.968 killing process with pid 2314573 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2314573 00:18:44.968 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.968 00:18:44.968 Latency(us) 00:18:44.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.968 =================================================================================================================== 00:18:44.968 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.968 [2024-07-16 00:22:36.434392] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:44.968 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2314573 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.227 rmmod nvme_tcp 00:18:45.227 rmmod nvme_fabrics 00:18:45.227 rmmod nvme_keyring 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2314324 ']' 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2314324 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2314324 ']' 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2314324 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2314324 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2314324' 00:18:45.227 killing process with pid 2314324 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2314324 00:18:45.227 [2024-07-16 00:22:36.771234] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:45.227 00:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2314324 00:18:45.484 00:22:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.484 00:22:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.484 00:22:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.484 00:22:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.484 00:22:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.484 00:22:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.484 00:22:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.484 00:22:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.019 00:22:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:48.019 00:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.019 00:18:48.019 real 0m17.862s 00:18:48.019 user 0m22.818s 00:18:48.019 sys 0m6.604s 00:18:48.019 00:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:48.019 00:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:48.019 ************************************ 00:18:48.019 END TEST nvmf_fips 
00:18:48.019 ************************************ 00:18:48.019 00:22:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:48.019 00:22:39 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:48.019 00:22:39 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:48.019 00:22:39 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:48.019 00:22:39 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:48.019 00:22:39 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.019 00:22:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:49.921 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:49.921 00:22:41 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:49.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:49.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:49.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:49.921 00:22:41 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:49.921 00:22:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:49.921 00:22:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
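gather_supported_nvmf_pci_devs, re-run here by nvmf.sh before it dispatches perf_adq, boils down to two steps: filter a PCI cache for known Intel E810/X722 and Mellanox vendor:device IDs, then resolve each surviving address to its kernel netdev through sysfs, keeping only interfaces that are up. A simplified stand-in using lspci in place of the script's internal pci_bus_cache (an assumption, since the cache itself never appears in this log; the operstate read is likewise inferred from the [[ up == up ]] checks above):

# Locate E810-family NICs (8086:1592 / 8086:159b, the IDs matched above) and
# map each PCI address to its bound, link-up net device.
net_devs=()
while read -r addr _; do
    for path in "/sys/bus/pci/devices/$addr/net/"*; do
        [[ -e $path ]] || continue                # no netdev bound to this port
        dev=${path##*/}
        if [[ $(< "/sys/class/net/$dev/operstate") == up ]]; then
            echo "Found net devices under $addr: $dev"
            net_devs+=("$dev")
        fi
    done
done < <(lspci -Dnd 8086:1592; lspci -Dnd 8086:159b)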
00:18:49.921 00:22:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.921 ************************************ 00:18:49.921 START TEST nvmf_perf_adq 00:18:49.921 ************************************ 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:49.922 * Looking for test storage... 00:18:49.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.922 00:22:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:51.885 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:51.885 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:51.885 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:51.885 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:51.885 00:22:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:52.144 00:22:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:54.673 00:22:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:59.951 00:22:50 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:59.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:59.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:59.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:59.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.951 00:22:50 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:59.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:18:59.951 00:18:59.951 --- 10.0.0.2 ping statistics --- 00:18:59.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.951 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:18:59.951 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:18:59.951 00:18:59.952 --- 10.0.0.1 ping statistics --- 00:18:59.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.952 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2320356 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2320356 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2320356 ']' 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.952 00:22:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 [2024-07-16 00:22:50.902989] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
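Unlike the FIPS run, perf_adq starts the target with --wait-for-rpc: the app pauses before framework init so socket-implementation options can still be changed. adq_configure_nvmf_target (its RPCs appear verbatim a little further down) sets the placement-id mode (0 on this first, baseline pass) and server-side zero-copy send on the posix impl, then finishes init and builds the subsystem. A sketch of that sequence, reusing $ROOT from the sketch above and assuming the default RPC socket:

# ADQ target bring-up: tune the sock impl while nvmf_tgt waits, then init.
RPC="$ROOT/scripts/rpc.py"
impl=$($RPC sock_get_default_impl | jq -r .impl_name)    # "posix" in this run
$RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420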
00:18:59.952 [2024-07-16 00:22:50.903064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.952 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.952 [2024-07-16 00:22:50.970474] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.952 [2024-07-16 00:22:51.080330] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.952 [2024-07-16 00:22:51.080385] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.952 [2024-07-16 00:22:51.080413] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.952 [2024-07-16 00:22:51.080424] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.952 [2024-07-16 00:22:51.080433] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.952 [2024-07-16 00:22:51.080587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.952 [2024-07-16 00:22:51.080650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.952 [2024-07-16 00:22:51.080736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.952 [2024-07-16 00:22:51.080733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 [2024-07-16 00:22:51.286533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 Malloc1 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.952 [2024-07-16 00:22:51.337082] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2320503 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:59.952 00:22:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:59.952 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:01.851 
"tick_rate": 2700000000, 00:19:01.851 "poll_groups": [ 00:19:01.851 { 00:19:01.851 "name": "nvmf_tgt_poll_group_000", 00:19:01.851 "admin_qpairs": 1, 00:19:01.851 "io_qpairs": 1, 00:19:01.851 "current_admin_qpairs": 1, 00:19:01.851 "current_io_qpairs": 1, 00:19:01.851 "pending_bdev_io": 0, 00:19:01.851 "completed_nvme_io": 20700, 00:19:01.851 "transports": [ 00:19:01.851 { 00:19:01.851 "trtype": "TCP" 00:19:01.851 } 00:19:01.851 ] 00:19:01.851 }, 00:19:01.851 { 00:19:01.851 "name": "nvmf_tgt_poll_group_001", 00:19:01.851 "admin_qpairs": 0, 00:19:01.851 "io_qpairs": 1, 00:19:01.851 "current_admin_qpairs": 0, 00:19:01.851 "current_io_qpairs": 1, 00:19:01.851 "pending_bdev_io": 0, 00:19:01.851 "completed_nvme_io": 19934, 00:19:01.851 "transports": [ 00:19:01.851 { 00:19:01.851 "trtype": "TCP" 00:19:01.851 } 00:19:01.851 ] 00:19:01.851 }, 00:19:01.851 { 00:19:01.851 "name": "nvmf_tgt_poll_group_002", 00:19:01.851 "admin_qpairs": 0, 00:19:01.851 "io_qpairs": 1, 00:19:01.851 "current_admin_qpairs": 0, 00:19:01.851 "current_io_qpairs": 1, 00:19:01.851 "pending_bdev_io": 0, 00:19:01.851 "completed_nvme_io": 18938, 00:19:01.851 "transports": [ 00:19:01.851 { 00:19:01.851 "trtype": "TCP" 00:19:01.851 } 00:19:01.851 ] 00:19:01.851 }, 00:19:01.851 { 00:19:01.851 "name": "nvmf_tgt_poll_group_003", 00:19:01.851 "admin_qpairs": 0, 00:19:01.851 "io_qpairs": 1, 00:19:01.851 "current_admin_qpairs": 0, 00:19:01.851 "current_io_qpairs": 1, 00:19:01.851 "pending_bdev_io": 0, 00:19:01.851 "completed_nvme_io": 20501, 00:19:01.851 "transports": [ 00:19:01.851 { 00:19:01.851 "trtype": "TCP" 00:19:01.851 } 00:19:01.851 ] 00:19:01.851 } 00:19:01.851 ] 00:19:01.851 }' 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:01.851 00:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2320503 00:19:09.966 Initializing NVMe Controllers 00:19:09.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:09.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:09.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:09.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:09.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:09.966 Initialization complete. Launching workers. 
00:19:09.966 ======================================================== 00:19:09.967 Latency(us) 00:19:09.967 Device Information : IOPS MiB/s Average min max 00:19:09.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10575.06 41.31 6052.72 1455.37 9687.58 00:19:09.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10227.56 39.95 6257.11 2552.93 9229.75 00:19:09.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9671.96 37.78 6617.32 1679.74 11848.38 00:19:09.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10667.66 41.67 5999.21 2131.50 8691.64 00:19:09.967 ======================================================== 00:19:09.967 Total : 41142.23 160.71 6222.39 1455.37 11848.38 00:19:09.967 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.967 rmmod nvme_tcp 00:19:09.967 rmmod nvme_fabrics 00:19:09.967 rmmod nvme_keyring 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2320356 ']' 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2320356 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2320356 ']' 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2320356 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2320356 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2320356' 00:19:09.967 killing process with pid 2320356 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2320356 00:19:09.967 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2320356 00:19:10.226 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:10.226 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:10.226 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:10.226 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.226 00:23:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.226 00:23:01 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.226 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.226 00:23:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.756 00:23:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.757 00:23:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:12.757 00:23:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:13.015 00:23:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:14.915 00:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.227 00:23:11 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:20.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:20.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
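The device walk being traced here reduces to a small sysfs loop. A sketch of the same lookup, using the 0000:0a:00.0 function found in this run (the exact link-state attribute the helper reads is not visible in the trace; operstate is the usual choice):

    # each PCI network function exposes its netdevs under .../net/
    pci=0000:0a:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                  # the glob may match nothing
        dev=${path##*/}                             # strip the sysfs prefix, keep the name
        [[ $(cat "$path/operstate") == up ]] \
            && echo "Found net devices under $pci: $dev"
    done

The [[ up == up ]] checks in the trace are this comparison after the helper has already resolved the state to "up".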
00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:20.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:20.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.227 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.228 
00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:19:20.228 00:19:20.228 --- 10.0.0.2 ping statistics --- 00:19:20.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.228 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:19:20.228 00:19:20.228 --- 10.0.0.1 ping statistics --- 00:19:20.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.228 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:20.228 net.core.busy_poll = 1 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:20.228 net.core.busy_read = 1 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2323127 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2323127 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2323127 ']' 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.228 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.228 [2024-07-16 00:23:11.742129] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:19:20.228 [2024-07-16 00:23:11.742229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.228 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.228 [2024-07-16 00:23:11.805481] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.486 [2024-07-16 00:23:11.913097] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.486 [2024-07-16 00:23:11.913151] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.486 [2024-07-16 00:23:11.913179] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.486 [2024-07-16 00:23:11.913190] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.486 [2024-07-16 00:23:11.913200] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
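Taken together, the adq_configure_driver steps above amount to the following sequence, run inside the target namespace. A condensed sketch with the device, IP, port, and queue split taken verbatim from this run:

    # hardware TC offload on, and the ice driver's packet-inspect heuristic off
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # busy polling so application threads poll their own hardware queues
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, offloaded (skip_sw)
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper then maps each transmit queue to its receive-queue counterpart via XPS, and the target below is started with --enable-placement-id 1 and --sock-priority 1 so that accepted connections can be placed on the poll group polling the same hardware queue.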
00:19:20.486 [2024-07-16 00:23:11.913257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.486 [2024-07-16 00:23:11.913318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.486 [2024-07-16 00:23:11.913384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.486 [2024-07-16 00:23:11.913386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.486 00:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.486 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:20.486 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:20.486 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.486 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.486 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.486 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:20.486 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.486 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.743 [2024-07-16 00:23:12.122494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.743 Malloc1 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.743 00:23:12 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.743 [2024-07-16 00:23:12.173467] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.743 00:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.744 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2323156 00:19:20.744 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:20.744 00:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:20.744 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.674 00:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:22.674 00:23:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.674 00:23:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.674 00:23:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.674 00:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:22.674 "tick_rate": 2700000000, 00:19:22.674 "poll_groups": [ 00:19:22.674 { 00:19:22.674 "name": "nvmf_tgt_poll_group_000", 00:19:22.674 "admin_qpairs": 1, 00:19:22.674 "io_qpairs": 3, 00:19:22.674 "current_admin_qpairs": 1, 00:19:22.674 "current_io_qpairs": 3, 00:19:22.674 "pending_bdev_io": 0, 00:19:22.674 "completed_nvme_io": 27074, 00:19:22.674 "transports": [ 00:19:22.674 { 00:19:22.674 "trtype": "TCP" 00:19:22.674 } 00:19:22.674 ] 00:19:22.674 }, 00:19:22.674 { 00:19:22.674 "name": "nvmf_tgt_poll_group_001", 00:19:22.674 "admin_qpairs": 0, 00:19:22.674 "io_qpairs": 1, 00:19:22.674 "current_admin_qpairs": 0, 00:19:22.674 "current_io_qpairs": 1, 00:19:22.674 "pending_bdev_io": 0, 00:19:22.674 "completed_nvme_io": 24352, 00:19:22.674 "transports": [ 00:19:22.674 { 00:19:22.674 "trtype": "TCP" 00:19:22.674 } 00:19:22.674 ] 00:19:22.674 }, 00:19:22.674 { 00:19:22.674 "name": "nvmf_tgt_poll_group_002", 00:19:22.674 "admin_qpairs": 0, 00:19:22.674 "io_qpairs": 0, 00:19:22.674 "current_admin_qpairs": 0, 00:19:22.674 "current_io_qpairs": 0, 00:19:22.674 "pending_bdev_io": 0, 00:19:22.674 "completed_nvme_io": 0, 
00:19:22.674 "transports": [ 00:19:22.674 { 00:19:22.674 "trtype": "TCP" 00:19:22.674 } 00:19:22.674 ] 00:19:22.674 }, 00:19:22.674 { 00:19:22.674 "name": "nvmf_tgt_poll_group_003", 00:19:22.674 "admin_qpairs": 0, 00:19:22.674 "io_qpairs": 0, 00:19:22.674 "current_admin_qpairs": 0, 00:19:22.674 "current_io_qpairs": 0, 00:19:22.674 "pending_bdev_io": 0, 00:19:22.674 "completed_nvme_io": 0, 00:19:22.674 "transports": [ 00:19:22.674 { 00:19:22.674 "trtype": "TCP" 00:19:22.674 } 00:19:22.674 ] 00:19:22.674 } 00:19:22.674 ] 00:19:22.675 }' 00:19:22.675 00:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:22.675 00:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:22.675 00:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:22.675 00:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:22.675 00:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2323156 00:19:30.778 Initializing NVMe Controllers 00:19:30.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:30.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:30.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:30.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:30.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:30.778 Initialization complete. Launching workers. 00:19:30.778 ======================================================== 00:19:30.778 Latency(us) 00:19:30.778 Device Information : IOPS MiB/s Average min max 00:19:30.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4533.00 17.71 14124.15 2011.09 63158.56 00:19:30.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4676.10 18.27 13748.81 1926.54 61459.90 00:19:30.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12658.90 49.45 5055.56 1448.72 7671.32 00:19:30.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4700.20 18.36 13662.14 2008.25 60038.34 00:19:30.779 ======================================================== 00:19:30.779 Total : 26568.20 103.78 9655.46 1448.72 63158.56 00:19:30.779 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.779 rmmod nvme_tcp 00:19:30.779 rmmod nvme_fabrics 00:19:30.779 rmmod nvme_keyring 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2323127 ']' 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2323127 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2323127 ']' 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2323127 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.779 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2323127 00:19:31.037 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:31.037 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:31.037 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2323127' 00:19:31.037 killing process with pid 2323127 00:19:31.037 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2323127 00:19:31.037 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2323127 00:19:31.296 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.296 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.296 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:31.296 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.296 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.296 00:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.296 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.296 00:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.583 00:23:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.583 00:23:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:34.583 00:19:34.583 real 0m44.604s 00:19:34.583 user 2m35.529s 00:19:34.583 sys 0m10.798s 00:19:34.583 00:23:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.583 00:23:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:34.583 ************************************ 00:19:34.583 END TEST nvmf_perf_adq 00:19:34.583 ************************************ 00:19:34.583 00:23:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:34.583 00:23:25 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:34.583 00:23:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:34.583 00:23:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.583 00:23:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.583 ************************************ 00:19:34.583 START TEST nvmf_shutdown 00:19:34.583 ************************************ 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:34.583 * Looking for test storage... 
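The killprocess helper traced just above reduces to a guarded kill-and-reap. A minimal sketch, with the pid from this run; the comm= lookup is what produces the reactor_0 name in the trace:

    pid=2323127
    if [[ $(uname) == Linux ]] && kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an SPDK target
        [[ $name != sudo ]] && echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap; also surfaces the exit code
    fi

The nvmf_shutdown suite whose output starts below drives these same init and teardown helpers again.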
00:19:34.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.583 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:34.584 ************************************ 00:19:34.584 START TEST nvmf_shutdown_tc1 00:19:34.584 ************************************ 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:34.584 00:23:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.584 00:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.484 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.484 00:23:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.484 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.484 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.484 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:36.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:36.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms
00:19:36.485
00:19:36.485 --- 10.0.0.2 ping statistics ---
00:19:36.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.485 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:36.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:36.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms
00:19:36.485
00:19:36.485 --- 10.0.0.1 ping statistics ---
00:19:36.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.485 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2326444
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2326444
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2326444 ']'
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:36.485 00:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:19:36.485 [2024-07-16 00:23:27.991246] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
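The nvmf_tcp_init trace above boils down to a short, reproducible bring-up: one E810 port is moved into a private network namespace to act as the NVMe/TCP target, its sibling stays in the root namespace as the initiator, and a firewall exception is opened for the listener port. A minimal sketch of that sequence follows (the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and port 4420 are simply the values this run uses; run as root):

    #!/usr/bin/env bash
    # Sketch of the namespace topology built by nvmf_tcp_init above.
    set -euo pipefail
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0                 # drop stale addresses
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"                       # private namespace for the target
    ip link set cvl_0_0 netns "$NS"          # target-side port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in root netns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Allow inbound NVMe/TCP (port 4420) on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Both directions must answer before the test proceeds.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

With the namespace in place, every target-side command in the log is simply prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD array captures.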
00:19:36.485 [2024-07-16 00:23:27.991319] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.485 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.485 [2024-07-16 00:23:28.056794] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.744 [2024-07-16 00:23:28.169333] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.744 [2024-07-16 00:23:28.169396] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.744 [2024-07-16 00:23:28.169424] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.744 [2024-07-16 00:23:28.169435] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.744 [2024-07-16 00:23:28.169445] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.744 [2024-07-16 00:23:28.169581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.744 [2024-07-16 00:23:28.169643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.744 [2024-07-16 00:23:28.169681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:36.744 [2024-07-16 00:23:28.169684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.744 [2024-07-16 00:23:28.321625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:36.744 00:23:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.744 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.002 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.002 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.002 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.002 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.002 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:37.002 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.002 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.002 Malloc1 00:19:37.002 [2024-07-16 00:23:28.410676] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.002 Malloc2 00:19:37.002 Malloc3 00:19:37.002 Malloc4 00:19:37.002 Malloc5 00:19:37.261 Malloc6 00:19:37.261 Malloc7 00:19:37.261 Malloc8 00:19:37.261 Malloc9 00:19:37.261 Malloc10 00:19:37.261 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.261 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:37.261 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.261 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2326623 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2326623 
/var/tmp/bdevperf.sock 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2326623 ']' 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 
"name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 
00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.520 )") 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.520 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.520 { 00:19:37.520 "params": { 00:19:37.520 "name": "Nvme$subsystem", 00:19:37.520 "trtype": "$TEST_TRANSPORT", 00:19:37.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.520 "adrfam": "ipv4", 00:19:37.520 "trsvcid": "$NVMF_PORT", 00:19:37.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.520 "hdgst": ${hdgst:-false}, 00:19:37.520 "ddgst": ${ddgst:-false} 00:19:37.520 }, 00:19:37.520 "method": "bdev_nvme_attach_controller" 00:19:37.520 } 00:19:37.520 EOF 00:19:37.521 )") 00:19:37.521 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:37.521 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:37.521 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:37.521 00:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme1", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme2", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme3", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme4", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme5", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme6", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme7", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme8", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:37.521 "hdgst": false, 
00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme9", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 },{ 00:19:37.521 "params": { 00:19:37.521 "name": "Nvme10", 00:19:37.521 "trtype": "tcp", 00:19:37.521 "traddr": "10.0.0.2", 00:19:37.521 "adrfam": "ipv4", 00:19:37.521 "trsvcid": "4420", 00:19:37.521 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:37.521 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:37.521 "hdgst": false, 00:19:37.521 "ddgst": false 00:19:37.521 }, 00:19:37.521 "method": "bdev_nvme_attach_controller" 00:19:37.521 }' 00:19:37.521 [2024-07-16 00:23:28.934344] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:19:37.521 [2024-07-16 00:23:28.934416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:37.521 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.521 [2024-07-16 00:23:28.997466] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.521 [2024-07-16 00:23:29.107358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2326623 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:39.423 00:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:40.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2326623 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:40.382 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2326444 00:19:40.382 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:40.382 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:40.382 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:40.382 00:23:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:40.382 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.382 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.382 { 00:19:40.382 "params": { 00:19:40.382 "name": "Nvme$subsystem", 00:19:40.382 "trtype": "$TEST_TRANSPORT", 00:19:40.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.382 "adrfam": "ipv4", 00:19:40.382 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.383 { 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme$subsystem", 00:19:40.383 "trtype": "$TEST_TRANSPORT", 00:19:40.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "$NVMF_PORT", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.383 "hdgst": ${hdgst:-false}, 00:19:40.383 "ddgst": ${ddgst:-false} 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 } 00:19:40.383 EOF 00:19:40.383 )") 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
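Note how both app invocations above receive the generated config as --json /dev/fd/63 or /dev/fd/62: the harness never writes a temporary file, it uses bash process substitution, so the child simply inherits a file descriptor that reads the generator's stdout. The same mechanism in isolation, reusing the hypothetical gen_attach_params sketch from the earlier note (the consumer here is only a stand-in for bdevperf's --json parser):

    # <(...) runs the generator in a subshell and exposes its output as
    # /dev/fd/NN, which is the path the child process sees.
    print_controller_names() {
        jq -r '.[].params.name' "$1"
    }

    print_controller_names <(gen_attach_params 1 2 3)
    # -> Nvme1
    #    Nvme2
    #    Nvme3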
00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:40.383 00:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme1", 00:19:40.383 "trtype": "tcp", 00:19:40.383 "traddr": "10.0.0.2", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "4420", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.383 "hdgst": false, 00:19:40.383 "ddgst": false 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 },{ 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme2", 00:19:40.383 "trtype": "tcp", 00:19:40.383 "traddr": "10.0.0.2", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "4420", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:40.383 "hdgst": false, 00:19:40.383 "ddgst": false 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 },{ 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme3", 00:19:40.383 "trtype": "tcp", 00:19:40.383 "traddr": "10.0.0.2", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "4420", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:40.383 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:40.383 "hdgst": false, 00:19:40.383 "ddgst": false 00:19:40.383 }, 00:19:40.383 "method": "bdev_nvme_attach_controller" 00:19:40.383 },{ 00:19:40.383 "params": { 00:19:40.383 "name": "Nvme4", 00:19:40.383 "trtype": "tcp", 00:19:40.383 "traddr": "10.0.0.2", 00:19:40.383 "adrfam": "ipv4", 00:19:40.383 "trsvcid": "4420", 00:19:40.383 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:40.384 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:40.384 "hdgst": false, 00:19:40.384 "ddgst": false 00:19:40.384 }, 00:19:40.384 "method": "bdev_nvme_attach_controller" 00:19:40.384 },{ 00:19:40.384 "params": { 00:19:40.384 "name": "Nvme5", 00:19:40.384 "trtype": "tcp", 00:19:40.384 "traddr": "10.0.0.2", 00:19:40.384 "adrfam": "ipv4", 00:19:40.384 "trsvcid": "4420", 00:19:40.384 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:40.384 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:40.384 "hdgst": false, 00:19:40.384 "ddgst": false 00:19:40.384 }, 00:19:40.384 "method": "bdev_nvme_attach_controller" 00:19:40.384 },{ 00:19:40.384 "params": { 00:19:40.384 "name": "Nvme6", 00:19:40.384 "trtype": "tcp", 00:19:40.384 "traddr": "10.0.0.2", 00:19:40.384 "adrfam": "ipv4", 00:19:40.384 "trsvcid": "4420", 00:19:40.384 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:40.384 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:40.384 "hdgst": false, 00:19:40.384 "ddgst": false 00:19:40.384 }, 00:19:40.384 "method": "bdev_nvme_attach_controller" 00:19:40.384 },{ 00:19:40.384 "params": { 00:19:40.384 "name": "Nvme7", 00:19:40.384 "trtype": "tcp", 00:19:40.384 "traddr": "10.0.0.2", 00:19:40.384 "adrfam": "ipv4", 00:19:40.384 "trsvcid": "4420", 00:19:40.384 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:40.384 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:40.384 "hdgst": false, 00:19:40.384 "ddgst": false 00:19:40.384 }, 00:19:40.384 "method": "bdev_nvme_attach_controller" 00:19:40.384 },{ 00:19:40.384 "params": { 00:19:40.384 "name": "Nvme8", 00:19:40.384 "trtype": "tcp", 00:19:40.384 "traddr": "10.0.0.2", 00:19:40.384 "adrfam": "ipv4", 00:19:40.384 "trsvcid": "4420", 00:19:40.384 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:40.384 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:40.384 "hdgst": false, 
00:19:40.384 "ddgst": false 00:19:40.384 }, 00:19:40.384 "method": "bdev_nvme_attach_controller" 00:19:40.384 },{ 00:19:40.384 "params": { 00:19:40.384 "name": "Nvme9", 00:19:40.384 "trtype": "tcp", 00:19:40.384 "traddr": "10.0.0.2", 00:19:40.384 "adrfam": "ipv4", 00:19:40.384 "trsvcid": "4420", 00:19:40.384 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:40.384 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:40.384 "hdgst": false, 00:19:40.384 "ddgst": false 00:19:40.384 }, 00:19:40.384 "method": "bdev_nvme_attach_controller" 00:19:40.384 },{ 00:19:40.384 "params": { 00:19:40.384 "name": "Nvme10", 00:19:40.384 "trtype": "tcp", 00:19:40.384 "traddr": "10.0.0.2", 00:19:40.384 "adrfam": "ipv4", 00:19:40.384 "trsvcid": "4420", 00:19:40.384 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:40.384 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:40.384 "hdgst": false, 00:19:40.384 "ddgst": false 00:19:40.384 }, 00:19:40.384 "method": "bdev_nvme_attach_controller" 00:19:40.384 }' 00:19:40.384 [2024-07-16 00:23:31.989090] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:19:40.384 [2024-07-16 00:23:31.989188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327047 ] 00:19:40.641 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.642 [2024-07-16 00:23:32.053531] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.642 [2024-07-16 00:23:32.167559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.012 Running I/O for 1 seconds... 00:19:43.382 00:19:43.382 Latency(us) 00:19:43.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.382 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.382 Verification LBA range: start 0x0 length 0x400 00:19:43.382 Nvme1n1 : 1.16 220.98 13.81 0.00 0.00 286661.21 21942.42 271853.04 00:19:43.382 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.382 Verification LBA range: start 0x0 length 0x400 00:19:43.382 Nvme2n1 : 1.13 225.58 14.10 0.00 0.00 275182.36 18641.35 264085.81 00:19:43.382 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.382 Verification LBA range: start 0x0 length 0x400 00:19:43.382 Nvme3n1 : 1.14 168.30 10.52 0.00 0.00 364478.26 22427.88 318456.41 00:19:43.382 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.382 Verification LBA range: start 0x0 length 0x400 00:19:43.382 Nvme4n1 : 1.15 222.53 13.91 0.00 0.00 271116.71 22039.51 273406.48 00:19:43.382 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.382 Verification LBA range: start 0x0 length 0x400 00:19:43.382 Nvme5n1 : 1.17 217.97 13.62 0.00 0.00 272456.44 20874.43 270299.59 00:19:43.383 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.383 Verification LBA range: start 0x0 length 0x400 00:19:43.383 Nvme6n1 : 1.17 218.95 13.68 0.00 0.00 266760.53 22622.06 268746.15 00:19:43.383 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.383 Verification LBA range: start 0x0 length 0x400 00:19:43.383 Nvme7n1 : 1.17 219.48 13.72 0.00 0.00 261530.36 21554.06 268746.15 00:19:43.383 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.383 Verification LBA range: start 
0x0 length 0x400 00:19:43.383 Nvme8n1 : 1.15 221.79 13.86 0.00 0.00 254192.26 21748.24 262532.36 00:19:43.383 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.383 Verification LBA range: start 0x0 length 0x400 00:19:43.383 Nvme9n1 : 1.18 273.62 17.10 0.00 0.00 203181.08 19418.07 265639.25 00:19:43.383 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.383 Verification LBA range: start 0x0 length 0x400 00:19:43.383 Nvme10n1 : 1.18 217.31 13.58 0.00 0.00 251259.64 22524.97 279620.27 00:19:43.383 =================================================================================================================== 00:19:43.383 Total : 2206.51 137.91 0.00 0.00 266550.44 18641.35 318456.41 00:19:43.383 00:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:43.383 00:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:43.640 00:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:43.640 00:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:43.640 00:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:43.640 00:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.640 00:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.640 rmmod nvme_tcp 00:19:43.640 rmmod nvme_fabrics 00:19:43.640 rmmod nvme_keyring 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2326444 ']' 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2326444 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2326444 ']' 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2326444 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2326444 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
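A quick consistency check on the Latency table above: bdevperf ran with -q 64 -o 65536 -w verify -t 1, so the MiB/s column should equal IOPS x 65536 / 2^20, and it does, e.g. for Nvme1n1:

    # Nvme1n1 reported 220.98 IOPS at a 65536-byte IO size:
    awk 'BEGIN { printf "%.2f MiB/s\n", 220.98 * 65536 / (1024 * 1024) }'
    # -> 13.81 MiB/s, matching the table

The same relation holds for the Total row (2206.51 x 65536 / 2^20 = 137.91), which is a cheap way to spot a mangled column when reading these logs.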
00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2326444' 00:19:43.640 killing process with pid 2326444 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2326444 00:19:43.640 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2326444 00:19:44.207 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.207 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.207 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.207 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.207 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.207 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.207 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.207 00:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.107 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:46.107 00:19:46.107 real 0m11.798s 00:19:46.107 user 0m34.150s 00:19:46.107 sys 0m3.217s 00:19:46.107 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.107 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.107 ************************************ 00:19:46.107 END TEST nvmf_shutdown_tc1 00:19:46.107 ************************************ 00:19:46.107 00:23:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:46.107 00:23:37 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:46.107 00:23:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:46.107 00:23:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.107 00:23:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:46.365 ************************************ 00:19:46.365 START TEST nvmf_shutdown_tc2 00:19:46.365 ************************************ 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
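The tc2 prologue that closes this excerpt is the same gather_supported_nvmf_pci_devs walk that opened tc1: known vendor:device IDs are bucketed into e810/x722/mlx arrays, and the net devices behind the matching PCI functions become the test interfaces. A rough standalone equivalent of that classification, using lspci -n instead of the harness's pci_bus_cache (so treat it as an approximation, with the device IDs taken from the trace):

    # Bucket NICs the way the trace does: Intel E810 (0x1592/0x159b),
    # Intel X722 (0x37d2), and the Mellanox ConnectX IDs listed above.
    declare -a e810=() x722=() mlx=()
    while read -r addr _class vd _rest; do
        case "${vd,,}" in
            8086:1592|8086:159b) e810+=("$addr") ;;
            8086:37d2)           x722+=("$addr") ;;
            15b3:a2dc|15b3:1021|15b3:a2d6|15b3:101d|15b3:1017|15b3:1019|15b3:1015|15b3:1013)
                                 mlx+=("$addr") ;;
        esac
    done < <(lspci -n)
    printf 'e810 ports: %s\n' "${e810[@]:-none}"   # e.g. 0a:00.0 and 0a:00.1 on this rig

On this runner the two E810 functions at 0000:0a:00.0 and 0000:0a:00.1 resolve to the cvl_0_0/cvl_0_1 net devices used throughout the log.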
xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.365 00:23:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:46.365 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:46.365 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:46.365 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:46.365 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:46.365 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:46.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:46.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms
00:19:46.366
00:19:46.366 --- 10.0.0.2 ping statistics ---
00:19:46.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:46.366 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:46.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:46.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:19:46.366
00:19:46.366 --- 10.0.0.1 ping statistics ---
00:19:46.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:46.366 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 --
nvmf/common.sh@481 -- # nvmfpid=2327811 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2327811 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2327811 ']' 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.366 00:23:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.366 [2024-07-16 00:23:37.942227] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:19:46.366 [2024-07-16 00:23:37.942300] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.366 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.623 [2024-07-16 00:23:38.013998] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.623 [2024-07-16 00:23:38.131459] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.623 [2024-07-16 00:23:38.131517] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.623 [2024-07-16 00:23:38.131532] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.623 [2024-07-16 00:23:38.131545] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.623 [2024-07-16 00:23:38.131556] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.623 [2024-07-16 00:23:38.131639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.623 [2024-07-16 00:23:38.131677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.623 [2024-07-16 00:23:38.131756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:46.623 [2024-07-16 00:23:38.131759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.881 [2024-07-16 00:23:38.270530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.881 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.881 Malloc1 00:19:46.881 [2024-07-16 00:23:38.345959] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.881 Malloc2 00:19:46.881 Malloc3 00:19:46.881 Malloc4 00:19:47.139 Malloc5 00:19:47.139 Malloc6 00:19:47.139 Malloc7 00:19:47.139 Malloc8 00:19:47.139 Malloc9 00:19:47.398 Malloc10 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2327990 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2327990 /var/tmp/bdevperf.sock 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2327990 ']' 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:47.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.398 { 00:19:47.398 "params": { 00:19:47.398 "name": "Nvme$subsystem", 00:19:47.398 "trtype": "$TEST_TRANSPORT", 00:19:47.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.398 "adrfam": "ipv4", 00:19:47.398 "trsvcid": "$NVMF_PORT", 00:19:47.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.398 "hdgst": ${hdgst:-false}, 00:19:47.398 "ddgst": ${ddgst:-false} 00:19:47.398 }, 00:19:47.398 "method": "bdev_nvme_attach_controller" 00:19:47.398 } 00:19:47.398 EOF 00:19:47.398 )") 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.398 { 00:19:47.398 "params": { 00:19:47.398 "name": "Nvme$subsystem", 00:19:47.398 "trtype": "$TEST_TRANSPORT", 00:19:47.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.398 "adrfam": "ipv4", 00:19:47.398 "trsvcid": "$NVMF_PORT", 00:19:47.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.398 "hdgst": ${hdgst:-false}, 00:19:47.398 "ddgst": ${ddgst:-false} 00:19:47.398 }, 00:19:47.398 "method": "bdev_nvme_attach_controller" 00:19:47.398 } 00:19:47.398 EOF 00:19:47.398 )") 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.398 { 00:19:47.398 "params": { 00:19:47.398 "name": "Nvme$subsystem", 00:19:47.398 "trtype": "$TEST_TRANSPORT", 00:19:47.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.398 "adrfam": "ipv4", 00:19:47.398 "trsvcid": "$NVMF_PORT", 00:19:47.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.398 "hdgst": ${hdgst:-false}, 00:19:47.398 "ddgst": ${ddgst:-false} 00:19:47.398 }, 00:19:47.398 "method": "bdev_nvme_attach_controller" 00:19:47.398 } 00:19:47.398 EOF 00:19:47.398 )") 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.398 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.398 { 00:19:47.398 "params": { 00:19:47.398 "name": "Nvme$subsystem", 00:19:47.398 "trtype": "$TEST_TRANSPORT", 00:19:47.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.399 "adrfam": "ipv4", 00:19:47.399 "trsvcid": "$NVMF_PORT", 
00:19:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.399 "hdgst": ${hdgst:-false}, 00:19:47.399 "ddgst": ${ddgst:-false} 00:19:47.399 }, 00:19:47.399 "method": "bdev_nvme_attach_controller" 00:19:47.399 } 00:19:47.399 EOF 00:19:47.399 )") 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.399 { 00:19:47.399 "params": { 00:19:47.399 "name": "Nvme$subsystem", 00:19:47.399 "trtype": "$TEST_TRANSPORT", 00:19:47.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.399 "adrfam": "ipv4", 00:19:47.399 "trsvcid": "$NVMF_PORT", 00:19:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.399 "hdgst": ${hdgst:-false}, 00:19:47.399 "ddgst": ${ddgst:-false} 00:19:47.399 }, 00:19:47.399 "method": "bdev_nvme_attach_controller" 00:19:47.399 } 00:19:47.399 EOF 00:19:47.399 )") 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.399 { 00:19:47.399 "params": { 00:19:47.399 "name": "Nvme$subsystem", 00:19:47.399 "trtype": "$TEST_TRANSPORT", 00:19:47.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.399 "adrfam": "ipv4", 00:19:47.399 "trsvcid": "$NVMF_PORT", 00:19:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.399 "hdgst": ${hdgst:-false}, 00:19:47.399 "ddgst": ${ddgst:-false} 00:19:47.399 }, 00:19:47.399 "method": "bdev_nvme_attach_controller" 00:19:47.399 } 00:19:47.399 EOF 00:19:47.399 )") 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.399 { 00:19:47.399 "params": { 00:19:47.399 "name": "Nvme$subsystem", 00:19:47.399 "trtype": "$TEST_TRANSPORT", 00:19:47.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.399 "adrfam": "ipv4", 00:19:47.399 "trsvcid": "$NVMF_PORT", 00:19:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.399 "hdgst": ${hdgst:-false}, 00:19:47.399 "ddgst": ${ddgst:-false} 00:19:47.399 }, 00:19:47.399 "method": "bdev_nvme_attach_controller" 00:19:47.399 } 00:19:47.399 EOF 00:19:47.399 )") 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.399 { 00:19:47.399 "params": { 00:19:47.399 "name": "Nvme$subsystem", 00:19:47.399 "trtype": "$TEST_TRANSPORT", 00:19:47.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.399 "adrfam": "ipv4", 00:19:47.399 "trsvcid": "$NVMF_PORT", 00:19:47.399 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.399 "hdgst": ${hdgst:-false}, 00:19:47.399 "ddgst": ${ddgst:-false} 00:19:47.399 }, 00:19:47.399 "method": "bdev_nvme_attach_controller" 00:19:47.399 } 00:19:47.399 EOF 00:19:47.399 )") 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.399 { 00:19:47.399 "params": { 00:19:47.399 "name": "Nvme$subsystem", 00:19:47.399 "trtype": "$TEST_TRANSPORT", 00:19:47.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.399 "adrfam": "ipv4", 00:19:47.399 "trsvcid": "$NVMF_PORT", 00:19:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.399 "hdgst": ${hdgst:-false}, 00:19:47.399 "ddgst": ${ddgst:-false} 00:19:47.399 }, 00:19:47.399 "method": "bdev_nvme_attach_controller" 00:19:47.399 } 00:19:47.399 EOF 00:19:47.399 )") 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.399 { 00:19:47.399 "params": { 00:19:47.399 "name": "Nvme$subsystem", 00:19:47.399 "trtype": "$TEST_TRANSPORT", 00:19:47.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.399 "adrfam": "ipv4", 00:19:47.399 "trsvcid": "$NVMF_PORT", 00:19:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.399 "hdgst": ${hdgst:-false}, 00:19:47.399 "ddgst": ${ddgst:-false} 00:19:47.399 }, 00:19:47.399 "method": "bdev_nvme_attach_controller" 00:19:47.399 } 00:19:47.399 EOF 00:19:47.399 )") 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:47.399 00:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:47.399 "params": { 00:19:47.399 "name": "Nvme1", 00:19:47.399 "trtype": "tcp", 00:19:47.399 "traddr": "10.0.0.2", 00:19:47.399 "adrfam": "ipv4", 00:19:47.399 "trsvcid": "4420", 00:19:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme2", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme3", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme4", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme5", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme6", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme7", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme8", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:47.400 "hdgst": false, 
00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme9", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 },{ 00:19:47.400 "params": { 00:19:47.400 "name": "Nvme10", 00:19:47.400 "trtype": "tcp", 00:19:47.400 "traddr": "10.0.0.2", 00:19:47.400 "adrfam": "ipv4", 00:19:47.400 "trsvcid": "4420", 00:19:47.400 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:47.400 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:47.400 "hdgst": false, 00:19:47.400 "ddgst": false 00:19:47.400 }, 00:19:47.400 "method": "bdev_nvme_attach_controller" 00:19:47.400 }' 00:19:47.400 [2024-07-16 00:23:38.863852] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:19:47.400 [2024-07-16 00:23:38.863965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327990 ] 00:19:47.400 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.400 [2024-07-16 00:23:38.926057] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.658 [2024-07-16 00:23:39.036080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.029 Running I/O for 10 seconds... 00:19:49.029 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.029 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:49.030 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:49.030 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.030 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.289 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:49.290 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:49.549 00:23:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:49.549 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2327990 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2327990 ']' 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2327990 00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # 
uname
00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2327990
00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2327990'
00:19:49.808 killing process with pid 2327990
00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2327990
00:19:49.808 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2327990
00:19:50.065 Received shutdown signal, test time was about 0.918050 seconds
00:19:50.065
00:19:50.065 Latency(us)
00:19:50.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:50.065 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme1n1 : 0.89 216.13 13.51 0.00 0.00 292494.35 39418.69 260978.92
00:19:50.065 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme2n1 : 0.91 211.82 13.24 0.00 0.00 292254.15 40972.14 219035.88
00:19:50.065 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme3n1 : 0.88 291.82 18.24 0.00 0.00 207057.73 17670.45 251658.24
00:19:50.065 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme4n1 : 0.85 233.80 14.61 0.00 0.00 250745.49 1941.81 264085.81
00:19:50.065 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme5n1 : 0.90 213.59 13.35 0.00 0.00 270339.79 24466.77 265639.25
00:19:50.065 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme6n1 : 0.91 211.62 13.23 0.00 0.00 267394.72 23398.78 236123.78
00:19:50.065 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme7n1 : 0.86 223.00 13.94 0.00 0.00 245679.34 15340.28 271853.04
00:19:50.065 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme8n1 : 0.86 224.29 14.02 0.00 0.00 238757.10 21262.79 265639.25
00:19:50.065 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme9n1 : 0.92 209.33 13.08 0.00 0.00 253343.42 22330.79 299815.06
00:19:50.065 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.065 Verification LBA range: start 0x0 length 0x400
00:19:50.065 Nvme10n1 : 0.90 214.06 13.38 0.00 0.00 240730.52 27573.67 267192.70
00:19:50.065 ===================================================================================================================
00:19:50.065 Total : 2249.44 140.59 0.00 0.00 254292.25 1941.81 299815.06
00:19:50.322 00:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2327811
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:51.251 rmmod nvme_tcp
00:19:51.251 rmmod nvme_fabrics
00:19:51.251 rmmod nvme_keyring
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2327811 ']'
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2327811
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2327811 ']'
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2327811
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2327811
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2327811'
00:19:51.251 killing process with pid 2327811
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2327811
00:19:51.251 00:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2327811
00:19:51.817 00:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 --
# '[' '' == iso ']' 00:19:51.817 00:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.817 00:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.817 00:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.817 00:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.817 00:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.817 00:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.817 00:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.348 00:19:54.348 real 0m7.619s 00:19:54.348 user 0m22.513s 00:19:54.348 sys 0m1.548s 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:54.348 ************************************ 00:19:54.348 END TEST nvmf_shutdown_tc2 00:19:54.348 ************************************ 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:54.348 ************************************ 00:19:54.348 START TEST nvmf_shutdown_tc3 00:19:54.348 ************************************ 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.348 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:54.349 00:23:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:54.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:54.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:54.349 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:54.349 00:23:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:54.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:54.349 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:54.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:19:54.350 00:19:54.350 --- 10.0.0.2 ping statistics --- 00:19:54.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.350 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:54.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:19:54.350 00:19:54.350 --- 10.0.0.1 ping statistics --- 00:19:54.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.350 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2328902 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2328902 00:19:54.350 00:23:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2328902 ']' 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.350 [2024-07-16 00:23:45.648333] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:19:54.350 [2024-07-16 00:23:45.648429] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.350 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.350 [2024-07-16 00:23:45.716168] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.350 [2024-07-16 00:23:45.827010] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.350 [2024-07-16 00:23:45.827068] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.350 [2024-07-16 00:23:45.827097] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.350 [2024-07-16 00:23:45.827109] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.350 [2024-07-16 00:23:45.827119] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
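The waitforlisten step traced above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock before the test proceeds. A minimal sketch of that pattern, assuming rpc.py's rpc_get_methods as the liveness probe (the real helper in test/common/autotest_common.sh adds more bookkeeping around the max_retries=100 budget seen in the trace):

    # Sketch only; waitforlisten_sketch is a hypothetical stand-in for the helper.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            # Bail out if the target died during startup.
            kill -0 "$pid" 2> /dev/null || return 1
            # Succeed once the app services RPCs on its UNIX domain socket.
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1    # never started listening within the retry budget
    }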
00:19:54.350 [2024-07-16 00:23:45.827212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.350 [2024-07-16 00:23:45.827258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.350 [2024-07-16 00:23:45.827281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:54.350 [2024-07-16 00:23:45.827287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.350 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.608 [2024-07-16 00:23:45.985808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.608 00:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.608 00:23:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.608 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.609 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.609 Malloc1 00:19:54.609 [2024-07-16 00:23:46.075482] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.609 Malloc2 00:19:54.609 Malloc3 00:19:54.609 Malloc4 00:19:54.909 Malloc5 00:19:54.909 Malloc6 00:19:54.909 Malloc7 00:19:54.909 Malloc8 00:19:54.909 Malloc9 00:19:54.909 Malloc10 00:19:54.909 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.909 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:54.909 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.909 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2328992 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2328992 /var/tmp/bdevperf.sock 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2328992 ']' 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
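Each pass of the create_subsystems loop above (target/shutdown.sh@27-@28) appends one block of RPCs to rpcs.txt, which shutdown.sh@35 then replays with a single rpc_cmd call; the heredoc body itself is hidden by set +x. A representative block for i=1, assuming the usual shutdown.sh template (the Malloc1/cnode1 names and the 10.0.0.2:4420 listener match the log; the bdev size and serial number are illustrative assumptions):

    # Appended to rpcs.txt for subsystem 1 (hypothetical reconstruction):
    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420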
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:19:55.199 {
00:19:55.199 "params": {
00:19:55.199 "name": "Nvme$subsystem",
00:19:55.199 "trtype": "$TEST_TRANSPORT",
00:19:55.199 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:55.199 "adrfam": "ipv4",
00:19:55.199 "trsvcid": "$NVMF_PORT",
00:19:55.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:55.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:55.199 "hdgst": ${hdgst:-false},
00:19:55.199 "ddgst": ${ddgst:-false}
00:19:55.199 },
00:19:55.199 "method": "bdev_nvme_attach_controller"
00:19:55.199 }
00:19:55.199 EOF
00:19:55.199 )")
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:19:55.199 [the nvmf/common.sh@534 + @554 pair above repeats verbatim once per subsystem argument; the nine identical repetitions for subsystems 2-10 are omitted here]
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:55.199 00:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:55.199 "params": { 00:19:55.199 "name": "Nvme1", 00:19:55.199 "trtype": "tcp", 00:19:55.199 "traddr": "10.0.0.2", 00:19:55.199 "adrfam": "ipv4", 00:19:55.199 "trsvcid": "4420", 00:19:55.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.199 "hdgst": false, 00:19:55.199 "ddgst": false 00:19:55.199 }, 00:19:55.199 "method": "bdev_nvme_attach_controller" 00:19:55.199 },{ 00:19:55.199 "params": { 00:19:55.199 "name": "Nvme2", 00:19:55.199 "trtype": "tcp", 00:19:55.199 "traddr": "10.0.0.2", 00:19:55.199 "adrfam": "ipv4", 00:19:55.199 "trsvcid": "4420", 00:19:55.199 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:55.199 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:55.199 "hdgst": false, 00:19:55.199 "ddgst": false 00:19:55.199 }, 00:19:55.199 "method": "bdev_nvme_attach_controller" 00:19:55.199 },{ 00:19:55.199 "params": { 00:19:55.199 "name": "Nvme3", 00:19:55.199 "trtype": "tcp", 00:19:55.199 "traddr": "10.0.0.2", 00:19:55.199 "adrfam": "ipv4", 00:19:55.199 "trsvcid": "4420", 00:19:55.199 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:55.199 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:55.199 "hdgst": false, 00:19:55.199 "ddgst": false 00:19:55.199 }, 00:19:55.199 "method": "bdev_nvme_attach_controller" 00:19:55.199 },{ 00:19:55.199 "params": { 00:19:55.199 "name": "Nvme4", 00:19:55.199 "trtype": "tcp", 00:19:55.199 "traddr": "10.0.0.2", 00:19:55.199 "adrfam": "ipv4", 00:19:55.199 "trsvcid": "4420", 00:19:55.199 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:55.200 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:55.200 "hdgst": false, 00:19:55.200 "ddgst": false 00:19:55.200 }, 00:19:55.200 "method": "bdev_nvme_attach_controller" 00:19:55.200 },{ 00:19:55.200 "params": { 00:19:55.200 "name": "Nvme5", 00:19:55.200 "trtype": "tcp", 00:19:55.200 "traddr": "10.0.0.2", 00:19:55.200 "adrfam": "ipv4", 00:19:55.200 "trsvcid": "4420", 00:19:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:55.200 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:55.200 "hdgst": false, 00:19:55.200 "ddgst": false 00:19:55.200 }, 00:19:55.200 "method": "bdev_nvme_attach_controller" 00:19:55.200 },{ 00:19:55.200 "params": { 00:19:55.200 "name": "Nvme6", 00:19:55.200 "trtype": "tcp", 00:19:55.200 "traddr": "10.0.0.2", 00:19:55.200 "adrfam": "ipv4", 00:19:55.200 "trsvcid": "4420", 00:19:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:55.200 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:55.200 "hdgst": false, 00:19:55.200 "ddgst": false 00:19:55.200 }, 00:19:55.200 "method": "bdev_nvme_attach_controller" 00:19:55.200 },{ 00:19:55.200 "params": { 00:19:55.200 "name": "Nvme7", 00:19:55.200 "trtype": "tcp", 00:19:55.200 "traddr": "10.0.0.2", 00:19:55.200 "adrfam": "ipv4", 00:19:55.200 "trsvcid": "4420", 00:19:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:55.200 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:55.200 "hdgst": false, 00:19:55.200 "ddgst": false 00:19:55.200 }, 00:19:55.200 "method": "bdev_nvme_attach_controller" 00:19:55.200 },{ 00:19:55.200 "params": { 00:19:55.200 "name": "Nvme8", 00:19:55.200 "trtype": "tcp", 00:19:55.200 "traddr": "10.0.0.2", 00:19:55.200 "adrfam": "ipv4", 00:19:55.200 "trsvcid": "4420", 00:19:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:55.200 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:55.200 "hdgst": false, 
00:19:55.200 "ddgst": false 00:19:55.200 }, 00:19:55.200 "method": "bdev_nvme_attach_controller" 00:19:55.200 },{ 00:19:55.200 "params": { 00:19:55.200 "name": "Nvme9", 00:19:55.200 "trtype": "tcp", 00:19:55.200 "traddr": "10.0.0.2", 00:19:55.200 "adrfam": "ipv4", 00:19:55.200 "trsvcid": "4420", 00:19:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:55.200 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:55.200 "hdgst": false, 00:19:55.200 "ddgst": false 00:19:55.200 }, 00:19:55.200 "method": "bdev_nvme_attach_controller" 00:19:55.200 },{ 00:19:55.200 "params": { 00:19:55.200 "name": "Nvme10", 00:19:55.200 "trtype": "tcp", 00:19:55.200 "traddr": "10.0.0.2", 00:19:55.200 "adrfam": "ipv4", 00:19:55.200 "trsvcid": "4420", 00:19:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:55.200 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:55.200 "hdgst": false, 00:19:55.200 "ddgst": false 00:19:55.200 }, 00:19:55.200 "method": "bdev_nvme_attach_controller" 00:19:55.200 }' 00:19:55.200 [2024-07-16 00:23:46.574995] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:19:55.200 [2024-07-16 00:23:46.575074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328992 ] 00:19:55.200 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.200 [2024-07-16 00:23:46.637561] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.200 [2024-07-16 00:23:46.747507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.722 Running I/O for 10 seconds... 00:19:57.722 00:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.722 00:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:57.722 00:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:57.722 00:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.722 00:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:57.722 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.723 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.723 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.723 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:57.723 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:57.723 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:57.981 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:58.253 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2328902 00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2328902 ']' 
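The polling traced in this block (read_io_count climbing 3 -> 67 -> 131 against the 100-read threshold, with sleep 0.25 between samples) is shutdown.sh's waitforio loop; distilled, using the same helper names as the trace:

    ret=1
    for ((i = 10; i != 0; i--)); do
        # Ask bdevperf's RPC server how many reads have completed on Nvme1n1.
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # enough I/O observed; the initiator path is alive
            break
        fi
        sleep 0.25
    done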
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2328902
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2328902
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2328902'
00:19:58.254 killing process with pid 2328902
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2328902
00:19:58.254 00:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2328902
00:19:58.254 [2024-07-16 00:23:49.712403] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c5c0 is same with the state(5) to be set
00:19:58.254 [the same recv-state message repeats for tqpair=0x236c5c0 with timestamps advancing through 00:23:49.713243]
00:19:58.254 [2024-07-16 00:23:49.714251] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e540 is same with the state(5) to be set
00:19:58.255 [the same message repeats for tqpair=0x236e540 with timestamps advancing through 00:23:49.715104, where this capture cuts off mid-entry]
with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.715117] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e540 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.715128] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e540 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716494] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716518] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716530] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716543] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716555] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716567] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716591] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716615] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716663] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716687] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716699] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716764] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.255 [2024-07-16 00:23:49.716905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.716917] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.716930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.716942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.716954] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.716966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the 
state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.716979] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.716991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717027] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717052] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717064] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717076] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717117] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717154] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717177] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.717239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x236caa0 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719770] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719835] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719859] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719943] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.719993] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720044] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 
00:23:49.720057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720096] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720130] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720194] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720230] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720244] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720257] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720293] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720306] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same 
with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720386] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720434] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720466] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720479] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720491] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.256 [2024-07-16 00:23:49.720562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.720573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.720585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d480 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.721985] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722011] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722036] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722120] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722204] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722252] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the 
state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722348] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722384] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722399] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722435] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722447] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722459] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722523] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722571] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722632] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722669] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722681] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722693] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.722769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4450 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724580] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 
00:23:49.724635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724738] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724902] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same 
with the state(5) to be set 00:19:58.257 [2024-07-16 00:23:49.724939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.724951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.724963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.724975] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.724987] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.724999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725130] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725174] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725198] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725287] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725374] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725386] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.725398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dcf0 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726140] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726205] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the 
state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726217] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726229] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726253] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726355] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726368] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726380] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726403] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726428] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726439] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726577] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.258 [2024-07-16 00:23:49.726602] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 00:23:49.726756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e060 is same with the state(5) to be set 00:19:58.259 [2024-07-16 
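Editor's note: the wall of tcp.c:1621 entries condensed above is a single guard firing repeatedly while the TCP qpairs are torn down. The message text indicates that nvmf_tcp_qpair_set_recv_state() rejects a transition to the recv state the qpair is already in, logging instead of asserting, so every poll-loop pass that re-requests the same state emits one more line. A minimal self-contained sketch of such a guard follows; the struct, the enum layout (including which member holds the value 5 printed as "state(5)"), and the fprintf stand-in for SPDK_ERRLOG are assumptions for illustration, not SPDK's actual definitions.

```c
/* Sketch of a redundant-transition guard matching the repeated log line.
 * All names and the enum ordering below are assumptions, not SPDK's code. */
#include <stdio.h>

enum pdu_recv_state {               /* stand-in for nvme_tcp_pdu_recv_state */
	RECV_STATE_AWAIT_PDU_READY,     /* 0 */
	RECV_STATE_AWAIT_PDU_CH,        /* 1 */
	RECV_STATE_AWAIT_PDU_PSH,       /* 2 */
	RECV_STATE_AWAIT_PDU_PAYLOAD,   /* 3 */
	RECV_STATE_QUIESCING,           /* 4 */
	RECV_STATE_ERROR                /* 5: assumed to be the "state(5)" above */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

/* Refuse (and log) a transition to the state already in effect; each extra
 * call during teardown adds one more copy of the message seen in this log. */
static void tcp_qpair_set_recv_state(struct tcp_qpair *tqpair,
				     enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

	tcp_qpair_set_recv_state(&q, RECV_STATE_ERROR); /* real transition: silent */
	tcp_qpair_set_recv_state(&q, RECV_STATE_ERROR); /* redundant: logs once   */
	tcp_qpair_set_recv_state(&q, RECV_STATE_ERROR); /* redundant: logs again  */
	return 0;
}
```

Run as-is, the two redundant calls print the message twice; the same mechanism, driven from a polling loop, produced the long duplicate runs condensed above.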
00:19:58.259 [2024-07-16 00:23:49.739582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:58.259 [2024-07-16 00:23:49.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:58.259 [2024-07-16 00:23:49.739681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:58.259 [2024-07-16 00:23:49.739695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:58.259 [2024-07-16 00:23:49.739708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:58.259 [2024-07-16 00:23:49.739722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:58.259 [2024-07-16 00:23:49.739748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:58.259 [2024-07-16 00:23:49.739762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:58.259 [2024-07-16 00:23:49.739775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18157b0 is same with the state(5) to be set
[... equivalent ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequences (cid:0 through cid:3, each followed by the same nvme_tcp.c:327 recv-state error) elided for tqpair=0x1884660 (00:23:49.739835-00:23:49.739961), tqpair=0x1816030 (00:23:49.740010-00:23:49.740124), tqpair=0x18a8680 (00:23:49.740169-00:23:49.740294) and tqpair=0x19a54c0 (00:23:49.740341-00:23:49.740464) ...]
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.259 [2024-07-16 00:23:49.740555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.259 [2024-07-16 00:23:49.740568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.259 [2024-07-16 00:23:49.740581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.259 [2024-07-16 00:23:49.740595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.259 [2024-07-16 00:23:49.740607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18056c0 is same with the state(5) to be set 00:19:58.260 [2024-07-16 00:23:49.740663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.740683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.740715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.740743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.740769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d9ab0 is same with the state(5) to be set 00:19:58.260 [2024-07-16 00:23:49.740825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.740845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.740873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.740910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.740935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.740948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fdd70 is same with the state(5) to be set 00:19:58.260 [2024-07-16 00:23:49.740991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.741011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.741031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.741044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.741058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.741071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.741084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.741097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.741109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12db610 is same with the state(5) to be set 00:19:58.260 [2024-07-16 00:23:49.741153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.741177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.741192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.741205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.741219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.741231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.741246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.260 [2024-07-16 00:23:49.741258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.741271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1884300 is same with the state(5) to be set 00:19:58.260 [2024-07-16 00:23:49.752157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752517] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.260 [2024-07-16 00:23:49.752605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.260 [2024-07-16 00:23:49.752619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.752983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.752999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.261 [2024-07-16 00:23:49.753833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.261 [2024-07-16 00:23:49.753848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.753862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.753890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.753906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.753922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.753935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.753951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.753966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.753981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.753995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.754015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.754030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.754046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.754061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.754076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.754090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.754105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.754119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.754134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.754148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.262 [2024-07-16 00:23:49.754163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2107210 is same with the state(5) to be set 00:19:58.262 [2024-07-16 00:23:49.754251] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2107210 was disconnected and freed. reset controller. 
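For readers decoding these entries: "ABORTED - SQ DELETION (00/08)" is NVMe status code type 0x00 (generic command status) with status code 0x08, i.e. the command was aborted because its submission queue was deleted during the controller reset. Below is a minimal sketch of how a completion callback can recognize this status using SPDK's public definitions; the callback itself is hypothetical, while the struct spdk_nvme_cpl fields and the SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION constants come from spdk/nvme_spec.h.

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical I/O completion callback: flags commands that were
 * aborted because their submission queue was deleted -- the
 * "ABORTED - SQ DELETION (00/08)" status printed in the log above. */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* sct/sc correspond to the "(00/08)" pair in the log line. */
		printf("command aborted by SQ deletion (sct=%u sc=0x%02x)\n",
		       cpl->status.sct, cpl->status.sc);
	} else if (spdk_nvme_cpl_is_error(cpl)) {
		printf("command failed (sct=%u sc=0x%02x)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}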
00:19:58.262 [2024-07-16 00:23:49.754568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18157b0 (9): Bad file descriptor
[The same flush failure, errno 9 (Bad file descriptor), is reported in turn for tqpairs 0x1884660, 0x1816030, 0x18a8680, 0x19a54c0, 0x18056c0, 0x17d9ab0, 0x17fdd70, 0x12db610, and 0x1884300.]
00:19:58.262 [2024-07-16 00:23:49.755079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.262 [2024-07-16 00:23:49.755103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[The WRITE / ABORTED - SQ DELETION pair repeats for cid:55..63 (lba 31616..32640), followed by the same pattern for READ sqid:1 cid:0..53 (lba 24576..31360, len:128 each).]
00:19:58.264 [2024-07-16 00:23:49.757063] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1971dd0 was disconnected and freed. reset controller.
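The "disconnected and freed. reset controller" notices come from bdev_nvme's poller: once the TCP socket is gone, spdk_nvme_qpair_process_completions() fails (hence the "Failed to flush tqpair ... (9): Bad file descriptor" errors above), the dead qpair is torn down, and the controller is reset. Here is a rough sketch of that poll-and-recover pattern against the public NVMe driver API; it is a simplified illustration, not bdev_nvme's actual reconnect logic, and the retry policy shown is an assumption.

#include <errno.h>
#include "spdk/nvme.h"

/* Simplified recovery sketch: poll an I/O qpair; on a transport-level
 * failure (e.g. the TCP connection dropped, as in the log above),
 * free the dead qpair, reset the controller, and allocate a new one. */
static int
poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0 /* no limit */);

	if (rc >= 0) {
		return rc;	/* number of completions reaped */
	}

	/* The qpair is no longer usable; mirror the log's sequence:
	 * disconnect/free the qpair, then reset the controller. */
	spdk_nvme_ctrlr_free_io_qpair(*qpair);
	*qpair = NULL;

	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return -EIO;	/* caller decides whether to retry */
	}

	*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	return *qpair != NULL ? 0 : -ENOMEM;
}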
00:19:58.265 [2024-07-16 00:23:49.759181] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17d4db0 was disconnected and freed. reset controller.
00:19:58.265 [2024-07-16 00:23:49.763289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:19:58.265 [2024-07-16 00:23:49.763338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:58.265 [2024-07-16 00:23:49.764112] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:58.265 [2024-07-16 00:23:49.764185] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:58.265 [2024-07-16 00:23:49.764262] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:58.265 [2024-07-16 00:23:49.764291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:58.265 [2024-07-16 00:23:49.764518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.265 [2024-07-16 00:23:49.764551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1816030 with addr=10.0.0.2, port=4420
00:19:58.265 [2024-07-16 00:23:49.764569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1816030 is same with the state(5) to be set
00:19:58.265 [2024-07-16 00:23:49.764711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.265 [2024-07-16 00:23:49.764737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18056c0 with addr=10.0.0.2, port=4420
00:19:58.265 [2024-07-16 00:23:49.764752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18056c0 is same with the state(5) to be set
00:19:58.265 [2024-07-16 00:23:49.764834] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:58.265 [2024-07-16 00:23:49.764914] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:58.265 [2024-07-16 00:23:49.765251] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:58.265 [2024-07-16 00:23:49.765577] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:58.265 [2024-07-16 00:23:49.765998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.265 [2024-07-16 00:23:49.766028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18157b0 with addr=10.0.0.2, port=4420
00:19:58.265 [2024-07-16 00:23:49.766044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18157b0 is same with the state(5) to be set
00:19:58.265 [2024-07-16 00:23:49.766066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1816030 (9): Bad file descriptor
00:19:58.265 [2024-07-16 00:23:49.766089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18056c0 (9): Bad file descriptor
00:19:58.267 [2024-07-16 00:23:49.768113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d690 is same with the state(5) to be set
00:19:58.267 [2024-07-16 00:23:49.768219] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x198d690 was disconnected and freed. reset controller.
00:19:58.267 [2024-07-16 00:23:49.768242] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.267 [2024-07-16 00:23:49.768388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18157b0 (9): Bad file descriptor
00:19:58.267 [2024-07-16 00:23:49.768414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:58.267 [2024-07-16 00:23:49.768428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:58.267 [2024-07-16 00:23:49.768445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:58.267 [2024-07-16 00:23:49.768465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:19:58.267 [2024-07-16 00:23:49.768480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:19:58.267 [2024-07-16 00:23:49.768493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:19:58.268 [2024-07-16
00:23:49.771141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771425] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.771557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.771571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1970980 is same with the state(5) to be set 00:19:58.269 [2024-07-16 00:23:49.772820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.772843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.772863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.772888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.772906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.772919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.772935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.772948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.772964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.772977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.772992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.269 [2024-07-16 00:23:49.773614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.269 [2024-07-16 00:23:49.773629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.773984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.773997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.774687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.774701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d38a0 is same with the state(5) to be set 00:19:58.270 [2024-07-16 00:23:49.775948] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.775971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.270 [2024-07-16 00:23:49.775991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.270 [2024-07-16 00:23:49.776006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.271 [2024-07-16 00:23:49.776807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.271 [2024-07-16 00:23:49.776821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.776835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.776853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.776868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.776890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.776906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.776919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.776935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.776948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.776963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.776976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.776991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:58.272 [2024-07-16 00:23:49.777147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 
00:23:49.777437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777726] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.777812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.777826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f800 is same with the state(5) to be set 00:19:58.272 [2024-07-16 00:23:49.779073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.779096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.779117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.779132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.779147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.779161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.779176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.779190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.779210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.779224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.779239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.779253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.779269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.779282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.272 [2024-07-16 00:23:49.779297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.272 [2024-07-16 00:23:49.779310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.779984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.779999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.273 [2024-07-16 00:23:49.780526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.273 [2024-07-16 00:23:49.780541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.780939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.780952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aee70 is same with the state(5) to be set 00:19:58.274 [2024-07-16 00:23:49.782179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782529] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.274 [2024-07-16 00:23:49.782896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.274 [2024-07-16 00:23:49.782910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.782926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.782939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.782954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.782967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.782986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:58.275 [2024-07-16 00:23:49.783708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.783972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.783986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 
00:23:49.784001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.784015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.784030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.784043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.784057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2497250 is same with the state(5) to be set 00:19:58.275 [2024-07-16 00:23:49.785706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.275 [2024-07-16 00:23:49.785731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.275 [2024-07-16 00:23:49.785752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.785766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.785783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.785797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.785812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.785826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.785841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.785854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.785870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.785891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.785908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.785921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.785937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.785950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.785965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.785978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.785993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.276 [2024-07-16 00:23:49.786517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.276 [2024-07-16 00:23:49.786531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:58.276 [2024-07-16 00:23:49.786546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.276 [2024-07-16 00:23:49.786560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same READ / ABORTED - SQ DELETION pair repeats for cid:29 through cid:63, lba:20096 through lba:24448 in steps of 128, timestamps 00:23:49.786575 through 00:23:49.787578]
00:19:58.277 [2024-07-16 00:23:49.787592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x263eee0 is same with the state(5) to be set
00:19:58.277 [2024-07-16 00:23:49.789163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.277 [2024-07-16 00:23:49.789189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.277 [2024-07-16 00:23:49.789206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:58.277 [2024-07-16 00:23:49.789229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:58.277 [2024-07-16 00:23:49.789247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:19:58.277 [2024-07-16 00:23:49.789304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:58.277 [2024-07-16 00:23:49.789321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:58.277 [2024-07-16 00:23:49.789339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
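Every READ in the burst above completes with status (00/08): Status Code Type 0x0 is the NVMe Generic Command Status set, and Status Code 0x08 in that set is "Command Aborted due to SQ Deletion" — the queued reads were discarded when the reset tore down submission queue 1, before bdev_nvme moved on to resetting the controllers. A minimal bash sketch of that decoding (the helper name is illustrative, not part of SPDK; only the pair seen in this log is handled):

decode_sct_sc() {
    # spdk_nvme_print_completion prints the status as "(SCT/SC)" in hex
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/08) echo "Generic Command Status / Command Aborted due to SQ Deletion" ;;
        *)     echo "unhandled SCT/SC pair: $sct/$sc" ;;
    esac
}
decode_sct_sc 00 08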
00:19:58.277 [2024-07-16 00:23:49.789414] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.277 [2024-07-16 00:23:49.789439] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.277 [2024-07-16 00:23:49.789461] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.277 [2024-07-16 00:23:49.789485] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.277 [2024-07-16 00:23:49.789504] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:58.277 [2024-07-16 00:23:49.789609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:19:58.277 [2024-07-16 00:23:49.789633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:58.277 [2024-07-16 00:23:49.789649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:58.277 task offset: 24576 on job bdev=Nvme7n1 fails
00:19:58.277
00:19:58.277 Latency(us)
00:19:58.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:58.277 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.277 Job: Nvme1n1 ended in about 0.91 seconds with error
00:19:58.277 Verification LBA range: start 0x0 length 0x400
00:19:58.277 Nvme1n1 : 0.91 209.99 13.12 70.00 0.00 226047.48 5242.88 253211.69
00:19:58.277 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.277 Job: Nvme2n1 ended in about 0.92 seconds with error
00:19:58.277 Verification LBA range: start 0x0 length 0x400
00:19:58.277 Nvme2n1 : 0.92 139.53 8.72 69.76 0.00 296392.12 22136.60 239230.67
00:19:58.277 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.277 Job: Nvme3n1 ended in about 0.91 seconds with error
00:19:58.277 Verification LBA range: start 0x0 length 0x400
00:19:58.277 Nvme3n1 : 0.91 211.78 13.24 70.59 0.00 214901.76 17087.91 257872.02
00:19:58.277 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.277 Job: Nvme4n1 ended in about 0.92 seconds with error
00:19:58.277 Verification LBA range: start 0x0 length 0x400
00:19:58.277 Nvme4n1 : 0.92 139.05 8.69 69.53 0.00 285005.50 22622.06 256318.58
00:19:58.277 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.277 Job: Nvme5n1 ended in about 0.91 seconds with error
00:19:58.277 Verification LBA range: start 0x0 length 0x400
00:19:58.277 Nvme5n1 : 0.91 211.52 13.22 70.51 0.00 205940.24 10243.03 257872.02
00:19:58.277 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.277 Job: Nvme6n1 ended in about 0.92 seconds with error
00:19:58.277 Verification LBA range: start 0x0 length 0x400
00:19:58.277 Nvme6n1 : 0.92 144.00 9.00 69.29 0.00 266956.63 20874.43 251658.24
00:19:58.277 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.277 Job: Nvme7n1 ended in about 0.91 seconds with error
00:19:58.277 Verification LBA range: start 0x0 length 0x400
00:19:58.278 Nvme7n1 : 0.91 212.13 13.26 70.71 0.00 196266.86 20486.07 248551.35
00:19:58.278 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.278 Job: Nvme8n1 ended in about 0.93 seconds with error
00:19:58.278 Verification LBA range: start 0x0 length 0x400
00:19:58.278 Nvme8n1 : 0.93 138.12 8.63 69.06 0.00 263116.93 37282.70 278066.82
00:19:58.278 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.278 Job: Nvme9n1 ended in about 0.93 seconds with error
00:19:58.278 Verification LBA range: start 0x0 length 0x400
00:19:58.278 Nvme9n1 : 0.93 137.66 8.60 68.83 0.00 258137.76 20486.07 253211.69
00:19:58.278 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:58.278 Job: Nvme10n1 ended in about 0.93 seconds with error
00:19:58.278 Verification LBA range: start 0x0 length 0x400
00:19:58.278 Nvme10n1 : 0.93 137.14 8.57 68.57 0.00 253452.39 21068.61 285834.05
00:19:58.278 ===================================================================================================================
00:19:58.278 Total : 1680.92 105.06 696.85 0.00 242462.44 5242.88 285834.05
00:19:58.278 [2024-07-16 00:23:49.814989] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:58.278 [2024-07-16 00:23:49.815070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:58.278 [2024-07-16 00:23:49.815481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.278 [2024-07-16 00:23:49.815519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d9ab0 with addr=10.0.0.2, port=4420
00:19:58.278 [2024-07-16 00:23:49.815540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d9ab0 is same with the state(5) to be set
00:19:58.278 [2024-07-16 00:23:49.815697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.278 [2024-07-16 00:23:49.815723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a54c0 with addr=10.0.0.2, port=4420
00:19:58.278 [2024-07-16 00:23:49.815739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a54c0 is same with the state(5) to be set
00:19:58.278 [2024-07-16 00:23:49.815899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.278 [2024-07-16 00:23:49.815925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17fdd70 with addr=10.0.0.2, port=4420
00:19:58.278 [2024-07-16 00:23:49.815941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fdd70 is same with the state(5) to be set
00:19:58.278 [2024-07-16 00:23:49.817785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
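The Total row of the bdevperf table above can be cross-checked from the per-device rows; summing the IOPS column reproduces the printed total, and the MiB/s and Fail/s columns behave the same way up to rounding. A quick awk sketch, with the values copied from the table:

awk 'BEGIN {
    # per-device IOPS, Nvme1n1 through Nvme10n1, as printed above
    split("209.99 139.53 211.78 139.05 211.52 144.00 212.13 138.12 137.66 137.14", iops, " ")
    for (i in iops) sum += iops[i]
    printf "sum(IOPS) = %.2f (table Total: 1680.92)\n", sum
}'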
00:19:58.278 [2024-07-16 00:23:49.818015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.278 [2024-07-16 00:23:49.818042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12db610 with addr=10.0.0.2, port=4420 00:19:58.278 [2024-07-16 00:23:49.818059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12db610 is same with the state(5) to be set 00:19:58.278 [2024-07-16 00:23:49.818196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.278 [2024-07-16 00:23:49.818233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a8680 with addr=10.0.0.2, port=4420 00:19:58.278 [2024-07-16 00:23:49.818249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a8680 is same with the state(5) to be set 00:19:58.278 [2024-07-16 00:23:49.818389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.278 [2024-07-16 00:23:49.818415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1884660 with addr=10.0.0.2, port=4420 00:19:58.278 [2024-07-16 00:23:49.818431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1884660 is same with the state(5) to be set 00:19:58.278 [2024-07-16 00:23:49.818564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.278 [2024-07-16 00:23:49.818589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1884300 with addr=10.0.0.2, port=4420 00:19:58.278 [2024-07-16 00:23:49.818604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1884300 is same with the state(5) to be set 00:19:58.278 [2024-07-16 00:23:49.818629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d9ab0 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.818653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a54c0 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.818671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fdd70 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.818720] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:58.278 [2024-07-16 00:23:49.818744] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:58.278 [2024-07-16 00:23:49.818775] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:58.278 [2024-07-16 00:23:49.818797] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:58.278 [2024-07-16 00:23:49.818815] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:58.278 [2024-07-16 00:23:49.818903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:58.278 [2024-07-16 00:23:49.818927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:58.278 [2024-07-16 00:23:49.818995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12db610 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.819021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a8680 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.819039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1884660 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.819056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1884300 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.819072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.819085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.819101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:58.278 [2024-07-16 00:23:49.819119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.819132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.819144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:58.278 [2024-07-16 00:23:49.819160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.819173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.819190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:58.278 [2024-07-16 00:23:49.819292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:58.278 [2024-07-16 00:23:49.819315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.278 [2024-07-16 00:23:49.819328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.278 [2024-07-16 00:23:49.819339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.278 [2024-07-16 00:23:49.819493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.278 [2024-07-16 00:23:49.819519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18056c0 with addr=10.0.0.2, port=4420 00:19:58.278 [2024-07-16 00:23:49.819535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18056c0 is same with the state(5) to be set 00:19:58.278 [2024-07-16 00:23:49.819675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.278 [2024-07-16 00:23:49.819700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1816030 with addr=10.0.0.2, port=4420 00:19:58.278 [2024-07-16 00:23:49.819716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1816030 is same with the state(5) to be set 00:19:58.278 [2024-07-16 00:23:49.819729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.819741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.819754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:58.278 [2024-07-16 00:23:49.819771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.819784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.819796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:58.278 [2024-07-16 00:23:49.819811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.819823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.819835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:58.278 [2024-07-16 00:23:49.819850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.819863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.819900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:58.278 [2024-07-16 00:23:49.819940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.278 [2024-07-16 00:23:49.819958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.278 [2024-07-16 00:23:49.819969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.278 [2024-07-16 00:23:49.819982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.278 [2024-07-16 00:23:49.820120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.278 [2024-07-16 00:23:49.820145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18157b0 with addr=10.0.0.2, port=4420 00:19:58.278 [2024-07-16 00:23:49.820160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18157b0 is same with the state(5) to be set 00:19:58.278 [2024-07-16 00:23:49.820183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18056c0 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.820202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1816030 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.820247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18157b0 (9): Bad file descriptor 00:19:58.278 [2024-07-16 00:23:49.820269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.820281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.820294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:58.278 [2024-07-16 00:23:49.820310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:58.278 [2024-07-16 00:23:49.820323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:58.278 [2024-07-16 00:23:49.820335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:58.278 [2024-07-16 00:23:49.820372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.278 [2024-07-16 00:23:49.820389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.278 [2024-07-16 00:23:49.820401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:58.279 [2024-07-16 00:23:49.820413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:58.279 [2024-07-16 00:23:49.820426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:58.279 [2024-07-16 00:23:49.820463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
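Every connect() attempted during this teardown fails with errno = 111, which on Linux is ECONNREFUSED — expected here, since the target has already been stopped while the host paths keep retrying 10.0.0.2:4420. The mapping is easy to confirm from the shell:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused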
00:19:58.846 00:23:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:58.846 00:23:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2328992 00:19:59.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2328992) - No such process 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.781 rmmod nvme_tcp 00:19:59.781 rmmod nvme_fabrics 00:19:59.781 rmmod nvme_keyring 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.781 00:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.306 00:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.306 00:20:02.306 real 0m7.987s 00:20:02.306 user 0m20.520s 00:20:02.306 sys 0m1.466s 00:20:02.306 
00:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.306 00:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.306 ************************************ 00:20:02.306 END TEST nvmf_shutdown_tc3 00:20:02.306 ************************************ 00:20:02.306 00:23:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:02.306 00:23:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:02.306 00:20:02.306 real 0m27.621s 00:20:02.306 user 1m17.276s 00:20:02.306 sys 0m6.369s 00:20:02.306 00:23:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.306 00:23:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:02.306 ************************************ 00:20:02.306 END TEST nvmf_shutdown 00:20:02.306 ************************************ 00:20:02.306 00:23:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:02.306 00:23:53 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:02.306 00:23:53 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.306 00:23:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:02.306 00:23:53 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:02.306 00:23:53 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.306 00:23:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:02.306 00:23:53 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:02.306 00:23:53 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:02.306 00:23:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:02.306 00:23:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.306 00:23:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:02.306 ************************************ 00:20:02.306 START TEST nvmf_multicontroller 00:20:02.306 ************************************ 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:02.306 * Looking for test storage... 
00:20:02.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.306 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=[elided: paths/export.sh@3 and @4 re-export the same PATH value with the go/golangci/protoc directories rotated to the front] 00:23:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo [elided: same PATH value] 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.307 00:23:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.206 00:23:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:04.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:04.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.206 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:04.207 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:04.207 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.207 00:23:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:20:04.207 00:20:04.207 --- 10.0.0.2 ping statistics --- 00:20:04.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.207 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:20:04.207 00:20:04.207 --- 10.0.0.1 ping statistics --- 00:20:04.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.207 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2331561 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2331561 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2331561 ']' 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.207 00:23:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.207 [2024-07-16 00:23:55.708744] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:20:04.207 [2024-07-16 00:23:55.708813] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.207 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.207 [2024-07-16 00:23:55.772555] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:04.465 [2024-07-16 00:23:55.883315] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.465 [2024-07-16 00:23:55.883370] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.465 [2024-07-16 00:23:55.883397] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.465 [2024-07-16 00:23:55.883409] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.465 [2024-07-16 00:23:55.883418] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
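For reference, the nvmftestinit plumbing traced above reduces to the following sequence: the first ice port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. This is condensed from the ip/iptables commands in the trace; the addr-flush and cleanup steps are omitted:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator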
00:20:04.465 [2024-07-16 00:23:55.883556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.465 [2024-07-16 00:23:55.883619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.465 [2024-07-16 00:23:55.883623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.465 00:23:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.465 [2024-07-16 00:23:56.030638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.465 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.723 Malloc0 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.723 [2024-07-16 00:23:56.099404] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.723 
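The rpc_cmd calls traced above and just below (rpc_cmd is the suite's wrapper around scripts/rpc.py, talking to the nvmf_tgt started earlier on /var/tmp/spdk.sock) assemble the first target-side subsystem. Written out directly, the sequence amounts to:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

(The following records repeat the listener step for port 4421 and build Malloc1/cnode2 the same way before launching bdevperf against both controllers.)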
00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:04.723 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 [2024-07-16 00:23:56.107254] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 Malloc1 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2331628 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2331628 /var/tmp/bdevperf.sock 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2331628 ']' 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.724 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.981 NVMe0n1 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.981 1 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.981 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.238 request: 00:20:05.238 { 00:20:05.238 "name": "NVMe0", 00:20:05.238 "trtype": "tcp", 00:20:05.238 "traddr": "10.0.0.2", 00:20:05.238 "adrfam": "ipv4", 00:20:05.238 "trsvcid": "4420", 00:20:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.238 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:05.238 "hostaddr": "10.0.0.2", 00:20:05.238 "hostsvcid": "60000", 00:20:05.238 "prchk_reftag": false, 00:20:05.238 "prchk_guard": false, 00:20:05.238 "hdgst": false, 00:20:05.238 "ddgst": false, 00:20:05.238 "method": "bdev_nvme_attach_controller", 00:20:05.238 "req_id": 1 00:20:05.238 } 00:20:05.238 Got JSON-RPC error response 00:20:05.238 response: 00:20:05.238 { 00:20:05.238 "code": -114, 00:20:05.238 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:05.238 } 00:20:05.238 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:05.238 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:05.238 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.238 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.238 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.238 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.239 request: 00:20:05.239 { 00:20:05.239 "name": "NVMe0", 00:20:05.239 "trtype": "tcp", 00:20:05.239 "traddr": "10.0.0.2", 00:20:05.239 "adrfam": "ipv4", 00:20:05.239 "trsvcid": "4420", 00:20:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:05.239 "hostaddr": "10.0.0.2", 00:20:05.239 "hostsvcid": "60000", 00:20:05.239 
"prchk_reftag": false, 00:20:05.239 "prchk_guard": false, 00:20:05.239 "hdgst": false, 00:20:05.239 "ddgst": false, 00:20:05.239 "method": "bdev_nvme_attach_controller", 00:20:05.239 "req_id": 1 00:20:05.239 } 00:20:05.239 Got JSON-RPC error response 00:20:05.239 response: 00:20:05.239 { 00:20:05.239 "code": -114, 00:20:05.239 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:05.239 } 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.239 request: 00:20:05.239 { 00:20:05.239 "name": "NVMe0", 00:20:05.239 "trtype": "tcp", 00:20:05.239 "traddr": "10.0.0.2", 00:20:05.239 "adrfam": "ipv4", 00:20:05.239 "trsvcid": "4420", 00:20:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.239 "hostaddr": "10.0.0.2", 00:20:05.239 "hostsvcid": "60000", 00:20:05.239 "prchk_reftag": false, 00:20:05.239 "prchk_guard": false, 00:20:05.239 "hdgst": false, 00:20:05.239 "ddgst": false, 00:20:05.239 "multipath": "disable", 00:20:05.239 "method": "bdev_nvme_attach_controller", 00:20:05.239 "req_id": 1 00:20:05.239 } 00:20:05.239 Got JSON-RPC error response 00:20:05.239 response: 00:20:05.239 { 00:20:05.239 "code": -114, 00:20:05.239 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:05.239 } 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.239 request: 00:20:05.239 { 00:20:05.239 "name": "NVMe0", 00:20:05.239 "trtype": "tcp", 00:20:05.239 "traddr": "10.0.0.2", 00:20:05.239 "adrfam": "ipv4", 00:20:05.239 "trsvcid": "4420", 00:20:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.239 "hostaddr": "10.0.0.2", 00:20:05.239 "hostsvcid": "60000", 00:20:05.239 "prchk_reftag": false, 00:20:05.239 "prchk_guard": false, 00:20:05.239 "hdgst": false, 00:20:05.239 "ddgst": false, 00:20:05.239 "multipath": "failover", 00:20:05.239 "method": "bdev_nvme_attach_controller", 00:20:05.239 "req_id": 1 00:20:05.239 } 00:20:05.239 Got JSON-RPC error response 00:20:05.239 response: 00:20:05.239 { 00:20:05.239 "code": -114, 00:20:05.239 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:05.239 } 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.239 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.496 
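The four NOT-wrapped attaches above are the negative half of the test: once controller NVMe0 exists inside bdevperf, re-attaching under the same name with a different host NQN, a different subsystem, multipath disabled, or multipath failover to the same path must all be rejected with JSON-RPC error -114 instead of silently creating a second controller. Stripped of the test plumbing, the failing case reduces to (a sketch; $rpc as in the earlier note):

  # Succeeds once, creating controller NVMe0 / bdev NVMe0n1:
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Reusing the name against cnode2 is rejected:
  #   "code": -114, "message": "A controller named NVMe0 already exists ..."
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000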
00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.496 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:05.496 00:23:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.868 0 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2331628 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2331628 ']' 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2331628 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2331628 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2331628' 00:20:06.868 
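bdevperf was started with -z, so it idles until told to run: the harness attaches NVMe0 on port 4421 plus a second controller NVMe1, checks that bdev_nvme_get_controllers reports exactly 2, then fires the workload preconfigured on its command line (-q 128 -o 4096 -w write -t 1) through the helper script, exactly as traced above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
  # a zero result (the bare "0" in the trace) indicates the one-second run completed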
killing process with pid 2331628 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2331628 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2331628 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:06.868 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:06.868 [2024-07-16 00:23:56.211815] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:20:06.868 [2024-07-16 00:23:56.211903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331628 ] 00:20:06.868 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.868 [2024-07-16 00:23:56.270995] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.868 [2024-07-16 00:23:56.379600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.868 [2024-07-16 00:23:56.949452] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d2b872d3-f1b6-4951-8f90-1ea6ba53645d already exists 00:20:06.868 [2024-07-16 00:23:56.949491] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d2b872d3-f1b6-4951-8f90-1ea6ba53645d alias for bdev NVMe1n1 00:20:06.868 [2024-07-16 00:23:56.949522] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:06.868 Running I/O for 1 seconds... 
00:20:06.868
00:20:06.868                                                                                   Latency(us)
00:20:06.868 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:20:06.868 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:06.868 	 NVMe0n1                            :       1.00   16540.71      64.61       0.00     0.00     7725.86    4951.61   15243.19
00:20:06.868 ===================================================================================================================
00:20:06.868 Total                                  :           16540.71      64.61       0.00     0.00     7725.86    4951.61   15243.19
00:20:06.868 Received shutdown signal, test time was about 1.000000 seconds
00:20:06.868
00:20:06.868                                                                                   Latency(us)
00:20:06.868 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:20:06.868 ===================================================================================================================
00:20:06.868 Total                                  :               0.00       0.00       0.00     0.00        0.00       0.00       0.00
--- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:20:06.868 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:23:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2331561 ']'
00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2331561
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2331561 ']'
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2331561
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:20:07.126 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2331561
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2331561'
killing process with pid 2331561
00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2331561
00:23:58
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2331561 00:20:07.383 00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.383 00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.383 00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.383 00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.383 00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.383 00:23:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.383 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.383 00:23:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.293 00:24:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.294 00:20:09.294 real 0m7.409s 00:20:09.294 user 0m11.596s 00:20:09.294 sys 0m2.296s 00:20:09.294 00:24:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.294 00:24:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:09.294 ************************************ 00:20:09.294 END TEST nvmf_multicontroller 00:20:09.294 ************************************ 00:20:09.294 00:24:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:09.294 00:24:00 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:09.294 00:24:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:09.294 00:24:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.294 00:24:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.553 ************************************ 00:20:09.553 START TEST nvmf_aer 00:20:09.553 ************************************ 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:09.553 * Looking for test storage... 
00:20:09.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.553 00:24:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.554 00:24:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:11.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.457 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:11.458 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:11.458 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:11.458 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.458 
00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:20:11.458 00:20:11.458 --- 10.0.0.2 ping statistics --- 00:20:11.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.458 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:20:11.458 00:20:11.458 --- 10.0.0.1 ping statistics --- 00:20:11.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.458 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2333951 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2333951 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2333951 ']' 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.458 00:24:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.458 [2024-07-16 00:24:03.049136] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:20:11.458 [2024-07-16 00:24:03.049219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.716 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.716 [2024-07-16 00:24:03.123758] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.716 [2024-07-16 00:24:03.235670] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.716 [2024-07-16 00:24:03.235731] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
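Both pings completing closes out the network setup that nvmftestinit did before the target app started: the two e810 ports are split across a network namespace, with cvl_0_0 moved into cvl_0_0_ns_spdk as the target NIC (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator (10.0.0.1). Condensed from the trace, the topology amounts to (a sketch):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP in
  ping -c 1 10.0.0.2                                           # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator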
00:20:11.716 [2024-07-16 00:24:03.235760] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.716 [2024-07-16 00:24:03.235772] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.716 [2024-07-16 00:24:03.235781] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.716 [2024-07-16 00:24:03.235854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.716 [2024-07-16 00:24:03.235885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.716 [2024-07-16 00:24:03.235941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.716 [2024-07-16 00:24:03.235944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.975 [2024-07-16 00:24:03.400789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.975 Malloc0 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.975 [2024-07-16 00:24:03.454549] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.975 [ 00:20:11.975 { 00:20:11.975 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.975 "subtype": "Discovery", 00:20:11.975 "listen_addresses": [], 00:20:11.975 "allow_any_host": true, 00:20:11.975 "hosts": [] 00:20:11.975 }, 00:20:11.975 { 00:20:11.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.975 "subtype": "NVMe", 00:20:11.975 "listen_addresses": [ 00:20:11.975 { 00:20:11.975 "trtype": "TCP", 00:20:11.975 "adrfam": "IPv4", 00:20:11.975 "traddr": "10.0.0.2", 00:20:11.975 "trsvcid": "4420" 00:20:11.975 } 00:20:11.975 ], 00:20:11.975 "allow_any_host": true, 00:20:11.975 "hosts": [], 00:20:11.975 "serial_number": "SPDK00000000000001", 00:20:11.975 "model_number": "SPDK bdev Controller", 00:20:11.975 "max_namespaces": 2, 00:20:11.975 "min_cntlid": 1, 00:20:11.975 "max_cntlid": 65519, 00:20:11.975 "namespaces": [ 00:20:11.975 { 00:20:11.975 "nsid": 1, 00:20:11.975 "bdev_name": "Malloc0", 00:20:11.975 "name": "Malloc0", 00:20:11.975 "nguid": "C37E8786C2D6431CA4EEDA0FFC469102", 00:20:11.975 "uuid": "c37e8786-c2d6-431c-a4ee-da0ffc469102" 00:20:11.975 } 00:20:11.975 ] 00:20:11.975 } 00:20:11.975 ] 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2333974 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:11.975 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:11.975 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:12.233 Malloc1 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:12.233 [ 00:20:12.233 { 00:20:12.233 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:12.233 "subtype": "Discovery", 00:20:12.233 "listen_addresses": [], 00:20:12.233 "allow_any_host": true, 00:20:12.233 "hosts": [] 00:20:12.233 }, 00:20:12.233 { 00:20:12.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.233 "subtype": "NVMe", 00:20:12.233 "listen_addresses": [ 00:20:12.233 { 00:20:12.233 "trtype": "TCP", 00:20:12.233 "adrfam": "IPv4", 00:20:12.233 "traddr": "10.0.0.2", 00:20:12.233 "trsvcid": "4420" 00:20:12.233 } 00:20:12.233 ], 00:20:12.233 "allow_any_host": true, 00:20:12.233 "hosts": [], 00:20:12.233 "serial_number": "SPDK00000000000001", 00:20:12.233 "model_number": "SPDK bdev Controller", 00:20:12.233 "max_namespaces": 2, 00:20:12.233 "min_cntlid": 1, 00:20:12.233 "max_cntlid": 65519, 00:20:12.233 "namespaces": [ 00:20:12.233 { 00:20:12.233 "nsid": 1, 00:20:12.233 "bdev_name": "Malloc0", 00:20:12.233 "name": "Malloc0", 00:20:12.233 "nguid": "C37E8786C2D6431CA4EEDA0FFC469102", 00:20:12.233 "uuid": "c37e8786-c2d6-431c-a4ee-da0ffc469102" 00:20:12.233 }, 00:20:12.233 { 00:20:12.233 "nsid": 2, 00:20:12.233 "bdev_name": "Malloc1", 00:20:12.233 "name": "Malloc1", 00:20:12.233 "nguid": "B6D4209C42B442CBB223F616B4123B7E", 00:20:12.233 "uuid": "b6d4209c-42b4-42cb-b223-f616b4123b7e" 00:20:12.233 } 00:20:12.233 ] 00:20:12.233 } 00:20:12.233 ] 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2333974 00:20:12.233 Asynchronous Event Request test 00:20:12.233 Attaching to 10.0.0.2 00:20:12.233 Attached to 10.0.0.2 00:20:12.233 Registering asynchronous event callbacks... 00:20:12.233 Starting namespace attribute notice tests for all controllers... 00:20:12.233 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:12.233 aer_cb - Changed Namespace 00:20:12.233 Cleaning up... 
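That output is the whole point of the test: the aer tool (test/nvme/aer/aer) connects to cnode1 (created with -m 2, so a second namespace is allowed), arms an Asynchronous Event Request, and touches /tmp/aer_touch_file once armed; the harness waits on that file, hot-adds namespace 2, and the target completes the AER with a Changed Namespace notice (log page 4), which the tool prints before cleaning up. The two sides reduce to (a sketch; $rpc standing in for the harness's rpc.py wrapper):

  # Host: connect, arm AERs, signal readiness via the touch file, then wait.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # Target: once the file exists, adding nsid 2 fires the namespace-attribute AER.
  $rpc bdev_malloc_create 64 4096 --name Malloc1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2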
00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.233 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.233 rmmod nvme_tcp 00:20:12.233 rmmod nvme_fabrics 00:20:12.491 rmmod nvme_keyring 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2333951 ']' 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2333951 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2333951 ']' 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2333951 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333951 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2333951' 00:20:12.491 killing process with pid 2333951 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2333951 00:20:12.491 00:24:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2333951 00:20:12.750 00:24:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.750 00:24:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.750 00:24:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:20:12.750 00:24:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.750 00:24:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.750 00:24:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.750 00:24:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.750 00:24:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.651 00:24:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:14.651 00:20:14.651 real 0m5.271s 00:20:14.651 user 0m4.154s 00:20:14.651 sys 0m1.833s 00:20:14.651 00:24:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:14.651 00:24:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.651 ************************************ 00:20:14.651 END TEST nvmf_aer 00:20:14.651 ************************************ 00:20:14.651 00:24:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:14.651 00:24:06 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:14.651 00:24:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:14.651 00:24:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.651 00:24:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:14.651 ************************************ 00:20:14.651 START TEST nvmf_async_init 00:20:14.651 ************************************ 00:20:14.651 00:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:14.909 * Looking for test storage... 
00:20:14.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=02a220601b4346d8aef85b7f19246edd 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.909 00:24:06 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.909 00:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:16.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:16.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:16.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
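The loop running here is nvmf/common.sh resolving each supported PCI function to its kernel netdev: the e810/x722/mlx arrays hold the supported device IDs (0x159b is an Intel E810 function), and each "Found net devices under ..." line comes from a sysfs glob; the same lookup repeats for 0000:0a:00.1 just below. A minimal sketch of that lookup (device address and resulting name taken from this run):

    pci=0000:0a:00.0
    # every entry under .../net/ is a kernel interface bound to this PCI function
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${netdev##*/}"   # prints cvl_0_0 here
    done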
00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:16.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.808 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.809 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.809 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.809 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.809 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:20:17.067 00:20:17.067 --- 10.0.0.2 ping statistics --- 00:20:17.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.067 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:20:17.067 00:20:17.067 --- 10.0.0.1 ping statistics --- 00:20:17.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.067 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2336466 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2336466 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2336466 ']' 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.067 00:24:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.067 [2024-07-16 00:24:08.557082] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
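The nvmf_tgt app starting here runs inside the cvl_0_0_ns_spdk namespace: nvmf_tcp_init has just turned the two E810 ports (presumably cabled back-to-back on this rig) into a target/initiator pair by splitting them across network namespaces, and the pings above verified the path in both directions. A minimal sketch of that topology, using the same commands as the trace:

    ip netns add cvl_0_0_ns_spdk                     # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    # the target is then launched inside the namespace, so it listens on 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1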
00:20:17.067 [2024-07-16 00:24:08.557177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.067 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.067 [2024-07-16 00:24:08.626386] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.325 [2024-07-16 00:24:08.745908] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.325 [2024-07-16 00:24:08.745979] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.325 [2024-07-16 00:24:08.745992] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.325 [2024-07-16 00:24:08.746023] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.325 [2024-07-16 00:24:08.746034] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.325 [2024-07-16 00:24:08.746064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.891 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.149 [2024-07-16 00:24:09.535536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.149 null0 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.149 00:24:09 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 02a220601b4346d8aef85b7f19246edd 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.149 [2024-07-16 00:24:09.575767] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.149 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.407 nvme0n1 00:20:18.407 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.407 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:18.407 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.407 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.407 [ 00:20:18.407 { 00:20:18.407 "name": "nvme0n1", 00:20:18.407 "aliases": [ 00:20:18.407 "02a22060-1b43-46d8-aef8-5b7f19246edd" 00:20:18.407 ], 00:20:18.407 "product_name": "NVMe disk", 00:20:18.407 "block_size": 512, 00:20:18.407 "num_blocks": 2097152, 00:20:18.407 "uuid": "02a22060-1b43-46d8-aef8-5b7f19246edd", 00:20:18.407 "assigned_rate_limits": { 00:20:18.407 "rw_ios_per_sec": 0, 00:20:18.407 "rw_mbytes_per_sec": 0, 00:20:18.407 "r_mbytes_per_sec": 0, 00:20:18.407 "w_mbytes_per_sec": 0 00:20:18.407 }, 00:20:18.407 "claimed": false, 00:20:18.407 "zoned": false, 00:20:18.407 "supported_io_types": { 00:20:18.407 "read": true, 00:20:18.407 "write": true, 00:20:18.407 "unmap": false, 00:20:18.407 "flush": true, 00:20:18.407 "reset": true, 00:20:18.407 "nvme_admin": true, 00:20:18.407 "nvme_io": true, 00:20:18.407 "nvme_io_md": false, 00:20:18.407 "write_zeroes": true, 00:20:18.407 "zcopy": false, 00:20:18.407 "get_zone_info": false, 00:20:18.407 "zone_management": false, 00:20:18.407 "zone_append": false, 00:20:18.407 "compare": true, 00:20:18.407 "compare_and_write": true, 00:20:18.407 "abort": true, 00:20:18.407 "seek_hole": false, 00:20:18.407 "seek_data": false, 00:20:18.407 "copy": true, 00:20:18.408 "nvme_iov_md": false 00:20:18.408 }, 00:20:18.408 "memory_domains": [ 00:20:18.408 { 00:20:18.408 "dma_device_id": "system", 00:20:18.408 "dma_device_type": 1 00:20:18.408 } 00:20:18.408 ], 00:20:18.408 "driver_specific": { 00:20:18.408 "nvme": [ 00:20:18.408 { 00:20:18.408 "trid": { 00:20:18.408 "trtype": "TCP", 00:20:18.408 "adrfam": "IPv4", 00:20:18.408 "traddr": "10.0.0.2", 
00:20:18.408 "trsvcid": "4420", 00:20:18.408 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:18.408 }, 00:20:18.408 "ctrlr_data": { 00:20:18.408 "cntlid": 1, 00:20:18.408 "vendor_id": "0x8086", 00:20:18.408 "model_number": "SPDK bdev Controller", 00:20:18.408 "serial_number": "00000000000000000000", 00:20:18.408 "firmware_revision": "24.09", 00:20:18.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.408 "oacs": { 00:20:18.408 "security": 0, 00:20:18.408 "format": 0, 00:20:18.408 "firmware": 0, 00:20:18.408 "ns_manage": 0 00:20:18.408 }, 00:20:18.408 "multi_ctrlr": true, 00:20:18.408 "ana_reporting": false 00:20:18.408 }, 00:20:18.408 "vs": { 00:20:18.408 "nvme_version": "1.3" 00:20:18.408 }, 00:20:18.408 "ns_data": { 00:20:18.408 "id": 1, 00:20:18.408 "can_share": true 00:20:18.408 } 00:20:18.408 } 00:20:18.408 ], 00:20:18.408 "mp_policy": "active_passive" 00:20:18.408 } 00:20:18.408 } 00:20:18.408 ] 00:20:18.408 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.408 00:24:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:18.408 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.408 00:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.408 [2024-07-16 00:24:09.829067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:18.408 [2024-07-16 00:24:09.829165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20362b0 (9): Bad file descriptor 00:20:18.408 [2024-07-16 00:24:10.002040] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:18.408 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.408 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:18.408 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.408 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.408 [ 00:20:18.408 { 00:20:18.408 "name": "nvme0n1", 00:20:18.408 "aliases": [ 00:20:18.408 "02a22060-1b43-46d8-aef8-5b7f19246edd" 00:20:18.408 ], 00:20:18.408 "product_name": "NVMe disk", 00:20:18.408 "block_size": 512, 00:20:18.408 "num_blocks": 2097152, 00:20:18.408 "uuid": "02a22060-1b43-46d8-aef8-5b7f19246edd", 00:20:18.408 "assigned_rate_limits": { 00:20:18.408 "rw_ios_per_sec": 0, 00:20:18.408 "rw_mbytes_per_sec": 0, 00:20:18.408 "r_mbytes_per_sec": 0, 00:20:18.408 "w_mbytes_per_sec": 0 00:20:18.408 }, 00:20:18.408 "claimed": false, 00:20:18.408 "zoned": false, 00:20:18.408 "supported_io_types": { 00:20:18.408 "read": true, 00:20:18.408 "write": true, 00:20:18.408 "unmap": false, 00:20:18.408 "flush": true, 00:20:18.408 "reset": true, 00:20:18.408 "nvme_admin": true, 00:20:18.408 "nvme_io": true, 00:20:18.408 "nvme_io_md": false, 00:20:18.408 "write_zeroes": true, 00:20:18.408 "zcopy": false, 00:20:18.408 "get_zone_info": false, 00:20:18.408 "zone_management": false, 00:20:18.408 "zone_append": false, 00:20:18.408 "compare": true, 00:20:18.408 "compare_and_write": true, 00:20:18.408 "abort": true, 00:20:18.408 "seek_hole": false, 00:20:18.408 "seek_data": false, 00:20:18.408 "copy": true, 00:20:18.408 "nvme_iov_md": false 00:20:18.408 }, 00:20:18.408 "memory_domains": [ 00:20:18.408 { 00:20:18.408 "dma_device_id": "system", 00:20:18.408 "dma_device_type": 
1 00:20:18.408 } 00:20:18.408 ], 00:20:18.408 "driver_specific": { 00:20:18.408 "nvme": [ 00:20:18.408 { 00:20:18.408 "trid": { 00:20:18.408 "trtype": "TCP", 00:20:18.408 "adrfam": "IPv4", 00:20:18.408 "traddr": "10.0.0.2", 00:20:18.408 "trsvcid": "4420", 00:20:18.408 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:18.408 }, 00:20:18.408 "ctrlr_data": { 00:20:18.408 "cntlid": 2, 00:20:18.408 "vendor_id": "0x8086", 00:20:18.408 "model_number": "SPDK bdev Controller", 00:20:18.408 "serial_number": "00000000000000000000", 00:20:18.408 "firmware_revision": "24.09", 00:20:18.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.408 "oacs": { 00:20:18.408 "security": 0, 00:20:18.408 "format": 0, 00:20:18.408 "firmware": 0, 00:20:18.408 "ns_manage": 0 00:20:18.408 }, 00:20:18.408 "multi_ctrlr": true, 00:20:18.408 "ana_reporting": false 00:20:18.408 }, 00:20:18.408 "vs": { 00:20:18.408 "nvme_version": "1.3" 00:20:18.408 }, 00:20:18.408 "ns_data": { 00:20:18.408 "id": 1, 00:20:18.666 "can_share": true 00:20:18.666 } 00:20:18.666 } 00:20:18.666 ], 00:20:18.666 "mp_policy": "active_passive" 00:20:18.666 } 00:20:18.666 } 00:20:18.666 ] 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6sVbMOXkM6 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6sVbMOXkM6 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.666 [2024-07-16 00:24:10.057852] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.666 [2024-07-16 00:24:10.058094] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6sVbMOXkM6 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.666 [2024-07-16 00:24:10.065857] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6sVbMOXkM6 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.666 [2024-07-16 00:24:10.073893] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.666 [2024-07-16 00:24:10.073967] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:18.666 nvme0n1 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.666 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.666 [ 00:20:18.666 { 00:20:18.666 "name": "nvme0n1", 00:20:18.666 "aliases": [ 00:20:18.666 "02a22060-1b43-46d8-aef8-5b7f19246edd" 00:20:18.666 ], 00:20:18.666 "product_name": "NVMe disk", 00:20:18.666 "block_size": 512, 00:20:18.666 "num_blocks": 2097152, 00:20:18.666 "uuid": "02a22060-1b43-46d8-aef8-5b7f19246edd", 00:20:18.666 "assigned_rate_limits": { 00:20:18.666 "rw_ios_per_sec": 0, 00:20:18.666 "rw_mbytes_per_sec": 0, 00:20:18.666 "r_mbytes_per_sec": 0, 00:20:18.667 "w_mbytes_per_sec": 0 00:20:18.667 }, 00:20:18.667 "claimed": false, 00:20:18.667 "zoned": false, 00:20:18.667 "supported_io_types": { 00:20:18.667 "read": true, 00:20:18.667 "write": true, 00:20:18.667 "unmap": false, 00:20:18.667 "flush": true, 00:20:18.667 "reset": true, 00:20:18.667 "nvme_admin": true, 00:20:18.667 "nvme_io": true, 00:20:18.667 "nvme_io_md": false, 00:20:18.667 "write_zeroes": true, 00:20:18.667 "zcopy": false, 00:20:18.667 "get_zone_info": false, 00:20:18.667 "zone_management": false, 00:20:18.667 "zone_append": false, 00:20:18.667 "compare": true, 00:20:18.667 "compare_and_write": true, 00:20:18.667 "abort": true, 00:20:18.667 "seek_hole": false, 00:20:18.667 "seek_data": false, 00:20:18.667 "copy": true, 00:20:18.667 "nvme_iov_md": false 00:20:18.667 }, 00:20:18.667 "memory_domains": [ 00:20:18.667 { 00:20:18.667 "dma_device_id": "system", 00:20:18.667 "dma_device_type": 1 00:20:18.667 } 00:20:18.667 ], 00:20:18.667 "driver_specific": { 00:20:18.667 "nvme": [ 00:20:18.667 { 00:20:18.667 "trid": { 00:20:18.667 "trtype": "TCP", 00:20:18.667 "adrfam": "IPv4", 00:20:18.667 "traddr": "10.0.0.2", 00:20:18.667 "trsvcid": "4421", 00:20:18.667 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:18.667 }, 00:20:18.667 "ctrlr_data": { 00:20:18.667 "cntlid": 3, 00:20:18.667 "vendor_id": "0x8086", 00:20:18.667 "model_number": "SPDK bdev Controller", 00:20:18.667 "serial_number": "00000000000000000000", 00:20:18.667 "firmware_revision": "24.09", 00:20:18.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:18.667 "oacs": { 00:20:18.667 "security": 0, 00:20:18.667 "format": 0, 00:20:18.667 "firmware": 0, 00:20:18.667 "ns_manage": 0 00:20:18.667 }, 00:20:18.667 "multi_ctrlr": true, 00:20:18.667 "ana_reporting": false 00:20:18.667 }, 00:20:18.667 "vs": { 00:20:18.667 "nvme_version": "1.3" 00:20:18.667 }, 00:20:18.667 "ns_data": { 00:20:18.667 "id": 1, 00:20:18.667 "can_share": true 00:20:18.667 } 00:20:18.667 } 00:20:18.667 ], 00:20:18.667 "mp_policy": "active_passive" 00:20:18.667 } 00:20:18.667 } 00:20:18.667 ] 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.6sVbMOXkM6 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.667 rmmod nvme_tcp 00:20:18.667 rmmod nvme_fabrics 00:20:18.667 rmmod nvme_keyring 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2336466 ']' 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2336466 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2336466 ']' 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2336466 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2336466 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2336466' 00:20:18.667 killing process with pid 2336466 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2336466 00:20:18.667 [2024-07-16 00:24:10.257083] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:18.667 [2024-07-16 00:24:10.257120] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:18.667 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2336466 00:20:18.926 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.926 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.926 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.926 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.926 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.926 00:24:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.926 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.926 00:24:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.458 00:24:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:21.458 00:20:21.458 real 0m6.312s 00:20:21.458 user 0m2.995s 00:20:21.458 sys 0m1.936s 00:20:21.458 00:24:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.458 00:24:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.458 ************************************ 00:20:21.458 END TEST nvmf_async_init 00:20:21.458 ************************************ 00:20:21.458 00:24:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:21.458 00:24:12 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:21.458 00:24:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:21.458 00:24:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.458 00:24:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:21.458 ************************************ 00:20:21.458 START TEST dma 00:20:21.458 ************************************ 00:20:21.458 00:24:12 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:21.458 * Looking for test storage... 
00:20:21.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:21.458 00:24:12 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.458 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.458 00:24:12 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.458 00:24:12 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.458 00:24:12 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.458 00:24:12 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.458 00:24:12 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.459 00:24:12 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.459 00:24:12 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:21.459 00:24:12 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.459 00:24:12 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.459 00:24:12 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:21.459 00:24:12 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:21.459 00:20:21.459 real 0m0.063s 00:20:21.459 user 0m0.026s 00:20:21.459 sys 0m0.042s 00:20:21.459 00:24:12 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.459 00:24:12 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:21.459 ************************************ 00:20:21.459 END TEST dma 00:20:21.459 ************************************ 00:20:21.459 00:24:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:21.459 00:24:12 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:21.459 00:24:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:21.459 00:24:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.459 00:24:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:21.459 ************************************ 00:20:21.459 START TEST nvmf_identify 00:20:21.459 ************************************ 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:21.459 * Looking for test storage... 
00:20:21.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:21.459 00:24:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.356 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:23.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:23.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:23.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:23.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:23.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:20:23.357 00:20:23.357 --- 10.0.0.2 ping statistics --- 00:20:23.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.357 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:20:23.357 00:20:23.357 --- 10.0.0.1 ping statistics --- 00:20:23.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.357 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2338673 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2338673 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2338673 ']' 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.357 00:24:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.357 [2024-07-16 00:24:14.823190] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:20:23.357 [2024-07-16 00:24:14.823265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.357 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.357 [2024-07-16 00:24:14.887835] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:23.615 [2024-07-16 00:24:15.001635] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
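For reference, the interface plumbing that nvmf/common.sh echoed above reduces to the sequence below: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), while its sibling port (cvl_0_1) stays in the default namespace as the initiator (10.0.0.1). A condensed replay of the traced commands:

    ip -4 addr flush cvl_0_0                  # drop any stale addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk              # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                        # initiator -> target reachability gate
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Both pings answering is what lets the common setup return 0; nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk (visible in the nvmfpid line above) so that 10.0.0.2:4420 faces the initiator port.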
00:20:23.615 [2024-07-16 00:24:15.001701] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.615 [2024-07-16 00:24:15.001714] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.615 [2024-07-16 00:24:15.001739] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.615 [2024-07-16 00:24:15.001749] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.615 [2024-07-16 00:24:15.001905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.615 [2024-07-16 00:24:15.001980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.616 [2024-07-16 00:24:15.002041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.616 [2024-07-16 00:24:15.002044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 [2024-07-16 00:24:15.125481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 Malloc0 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 [2024-07-16 00:24:15.203013] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 [ 00:20:23.616 { 00:20:23.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:23.616 "subtype": "Discovery", 00:20:23.616 "listen_addresses": [ 00:20:23.616 { 00:20:23.616 "trtype": "TCP", 00:20:23.616 "adrfam": "IPv4", 00:20:23.616 "traddr": "10.0.0.2", 00:20:23.616 "trsvcid": "4420" 00:20:23.616 } 00:20:23.616 ], 00:20:23.616 "allow_any_host": true, 00:20:23.616 "hosts": [] 00:20:23.616 }, 00:20:23.616 { 00:20:23.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.616 "subtype": "NVMe", 00:20:23.616 "listen_addresses": [ 00:20:23.616 { 00:20:23.616 "trtype": "TCP", 00:20:23.616 "adrfam": "IPv4", 00:20:23.616 "traddr": "10.0.0.2", 00:20:23.616 "trsvcid": "4420" 00:20:23.616 } 00:20:23.616 ], 00:20:23.616 "allow_any_host": true, 00:20:23.616 "hosts": [], 00:20:23.616 "serial_number": "SPDK00000000000001", 00:20:23.616 "model_number": "SPDK bdev Controller", 00:20:23.616 "max_namespaces": 32, 00:20:23.616 "min_cntlid": 1, 00:20:23.616 "max_cntlid": 65519, 00:20:23.616 "namespaces": [ 00:20:23.616 { 00:20:23.616 "nsid": 1, 00:20:23.616 "bdev_name": "Malloc0", 00:20:23.616 "name": "Malloc0", 00:20:23.616 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:23.616 "eui64": "ABCDEF0123456789", 00:20:23.616 "uuid": "6b90be76-cc01-4ecf-8739-1129e0f0f312" 00:20:23.616 } 00:20:23.616 ] 00:20:23.616 } 00:20:23.616 ] 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.616 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:23.876 [2024-07-16 00:24:15.245700] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
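The target configuration driven through rpc_cmd above is ordinary SPDK JSON-RPC, so the same discovery-plus-NVM-subsystem layout can be reproduced outside the harness with scripts/rpc.py against the default /var/tmp/spdk.sock. A sketch using the exact arguments from this run (paths relative to an SPDK checkout; in this run the target additionally runs inside the cvl_0_0_ns_spdk namespace):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &           # target app on 4 cores
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0  # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Query the discovery controller the same way identify.sh@39 does:
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The -L all trace that follows walks the fabric connect state machine end to end: the icreq/icresp exchange, FABRIC CONNECT, VS and CAP property reads, CC.EN toggled from 0 to 1 until CSTS.RDY follows, IDENTIFY, keep-alive and AER configuration, and finally GET LOG PAGE reads of the discovery log that is dumped below.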
00:20:23.876 [2024-07-16 00:24:15.245743] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338817 ] 00:20:23.876 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.876 [2024-07-16 00:24:15.280313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:23.876 [2024-07-16 00:24:15.280375] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.876 [2024-07-16 00:24:15.280385] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.876 [2024-07-16 00:24:15.280400] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.876 [2024-07-16 00:24:15.280411] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.876 [2024-07-16 00:24:15.281181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:23.876 [2024-07-16 00:24:15.281250] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d7a6e0 0 00:20:23.876 [2024-07-16 00:24:15.290889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.876 [2024-07-16 00:24:15.290916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.876 [2024-07-16 00:24:15.290941] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.876 [2024-07-16 00:24:15.290951] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.876 [2024-07-16 00:24:15.291002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.876 [2024-07-16 00:24:15.291017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.876 [2024-07-16 00:24:15.291025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.876 [2024-07-16 00:24:15.291045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.876 [2024-07-16 00:24:15.291074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.876 [2024-07-16 00:24:15.298890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.876 [2024-07-16 00:24:15.298909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.876 [2024-07-16 00:24:15.298917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.876 [2024-07-16 00:24:15.298924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.876 [2024-07-16 00:24:15.298941] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.876 [2024-07-16 00:24:15.298953] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:23.876 [2024-07-16 00:24:15.298963] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:23.876 [2024-07-16 00:24:15.298988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.876 [2024-07-16 00:24:15.298997] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.876 [2024-07-16 00:24:15.299003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.876 [2024-07-16 00:24:15.299014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.877 [2024-07-16 00:24:15.299039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.877 [2024-07-16 00:24:15.299239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.877 [2024-07-16 00:24:15.299254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.877 [2024-07-16 00:24:15.299261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.877 [2024-07-16 00:24:15.299278] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:23.877 [2024-07-16 00:24:15.299291] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:23.877 [2024-07-16 00:24:15.299303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.877 [2024-07-16 00:24:15.299328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.877 [2024-07-16 00:24:15.299349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.877 [2024-07-16 00:24:15.299523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.877 [2024-07-16 00:24:15.299538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.877 [2024-07-16 00:24:15.299545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.877 [2024-07-16 00:24:15.299560] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:23.877 [2024-07-16 00:24:15.299575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.877 [2024-07-16 00:24:15.299592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.877 [2024-07-16 00:24:15.299617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.877 [2024-07-16 00:24:15.299639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.877 [2024-07-16 00:24:15.299784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.877 
[2024-07-16 00:24:15.299799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.877 [2024-07-16 00:24:15.299806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.877 [2024-07-16 00:24:15.299822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.877 [2024-07-16 00:24:15.299839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.299854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.877 [2024-07-16 00:24:15.299865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.877 [2024-07-16 00:24:15.299892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.877 [2024-07-16 00:24:15.300035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.877 [2024-07-16 00:24:15.300047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.877 [2024-07-16 00:24:15.300054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.300060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.877 [2024-07-16 00:24:15.300069] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:23.877 [2024-07-16 00:24:15.300078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:23.877 [2024-07-16 00:24:15.300091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.877 [2024-07-16 00:24:15.300201] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:23.877 [2024-07-16 00:24:15.300209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.877 [2024-07-16 00:24:15.300225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.300232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.300254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.877 [2024-07-16 00:24:15.300265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.877 [2024-07-16 00:24:15.300286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.877 [2024-07-16 00:24:15.300475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.877 [2024-07-16 00:24:15.300488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.877 [2024-07-16 00:24:15.300494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.300501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.877 [2024-07-16 00:24:15.300513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.877 [2024-07-16 00:24:15.300531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.300540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.300546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.877 [2024-07-16 00:24:15.300556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.877 [2024-07-16 00:24:15.300577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.877 [2024-07-16 00:24:15.300721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.877 [2024-07-16 00:24:15.300736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.877 [2024-07-16 00:24:15.300743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.300749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.877 [2024-07-16 00:24:15.300757] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.877 [2024-07-16 00:24:15.300765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:23.877 [2024-07-16 00:24:15.300779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:23.877 [2024-07-16 00:24:15.300793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.877 [2024-07-16 00:24:15.300811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.300819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.877 [2024-07-16 00:24:15.300830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.877 [2024-07-16 00:24:15.300851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.877 [2024-07-16 00:24:15.301068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.877 [2024-07-16 00:24:15.301081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.877 [2024-07-16 00:24:15.301088] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.301095] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d7a6e0): datao=0, datal=4096, cccid=0 00:20:23.877 [2024-07-16 00:24:15.301103] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dda540) on tqpair(0x1d7a6e0): expected_datao=0, payload_size=4096 00:20:23.877 [2024-07-16 00:24:15.301110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.301122] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.301131] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.301172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.877 [2024-07-16 00:24:15.301183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.877 [2024-07-16 00:24:15.301189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.301196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.877 [2024-07-16 00:24:15.301208] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:23.877 [2024-07-16 00:24:15.301216] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:23.877 [2024-07-16 00:24:15.301224] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:23.877 [2024-07-16 00:24:15.301238] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:23.877 [2024-07-16 00:24:15.301247] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:23.877 [2024-07-16 00:24:15.301255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:23.877 [2024-07-16 00:24:15.301270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.877 [2024-07-16 00:24:15.301288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.301296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.877 [2024-07-16 00:24:15.301303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.877 [2024-07-16 00:24:15.301314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.877 [2024-07-16 00:24:15.301335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.877 [2024-07-16 00:24:15.301515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.877 [2024-07-16 00:24:15.301530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.877 [2024-07-16 00:24:15.301537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.878 [2024-07-16 00:24:15.301557] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.301581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.878 [2024-07-16 00:24:15.301591] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.301613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.878 [2024-07-16 00:24:15.301622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.301644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.878 [2024-07-16 00:24:15.301653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.301675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.878 [2024-07-16 00:24:15.301684] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.878 [2024-07-16 00:24:15.301703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.878 [2024-07-16 00:24:15.301717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.301724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.301753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.878 [2024-07-16 00:24:15.301776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda540, cid 0, qid 0 00:20:23.878 [2024-07-16 00:24:15.301787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda6c0, cid 1, qid 0 00:20:23.878 [2024-07-16 00:24:15.301795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda840, cid 2, qid 0 00:20:23.878 [2024-07-16 00:24:15.301817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.878 [2024-07-16 00:24:15.301830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddab40, cid 4, qid 0 00:20:23.878 [2024-07-16 00:24:15.302063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.878 [2024-07-16 00:24:15.302079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.878 [2024-07-16 00:24:15.302085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddab40) on tqpair=0x1d7a6e0 00:20:23.878 [2024-07-16 00:24:15.302101] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:23.878 [2024-07-16 00:24:15.302110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:23.878 [2024-07-16 00:24:15.302128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.302148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.878 [2024-07-16 00:24:15.302169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddab40, cid 4, qid 0 00:20:23.878 [2024-07-16 00:24:15.302358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.878 [2024-07-16 00:24:15.302370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.878 [2024-07-16 00:24:15.302377] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302383] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d7a6e0): datao=0, datal=4096, cccid=4 00:20:23.878 [2024-07-16 00:24:15.302390] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddab40) on tqpair(0x1d7a6e0): expected_datao=0, payload_size=4096 00:20:23.878 [2024-07-16 00:24:15.302398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302408] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302415] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.878 [2024-07-16 00:24:15.302470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.878 [2024-07-16 00:24:15.302476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddab40) on tqpair=0x1d7a6e0 00:20:23.878 [2024-07-16 00:24:15.302502] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:23.878 [2024-07-16 00:24:15.302544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.302566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.878 [2024-07-16 00:24:15.302578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.302595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.302605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.878 [2024-07-16 00:24:15.302631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1ddab40, cid 4, qid 0 00:20:23.878 [2024-07-16 00:24:15.302643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddacc0, cid 5, qid 0 00:20:23.878 [2024-07-16 00:24:15.302860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.878 [2024-07-16 00:24:15.306883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.878 [2024-07-16 00:24:15.306896] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.306903] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d7a6e0): datao=0, datal=1024, cccid=4 00:20:23.878 [2024-07-16 00:24:15.306910] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddab40) on tqpair(0x1d7a6e0): expected_datao=0, payload_size=1024 00:20:23.878 [2024-07-16 00:24:15.306917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.306927] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.306935] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.306943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.878 [2024-07-16 00:24:15.306952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.878 [2024-07-16 00:24:15.306958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.306965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddacc0) on tqpair=0x1d7a6e0 00:20:23.878 [2024-07-16 00:24:15.346895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.878 [2024-07-16 00:24:15.346915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.878 [2024-07-16 00:24:15.346923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.346930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddab40) on tqpair=0x1d7a6e0 00:20:23.878 [2024-07-16 00:24:15.346955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.346966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.346977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.878 [2024-07-16 00:24:15.347008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddab40, cid 4, qid 0 00:20:23.878 [2024-07-16 00:24:15.347210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.878 [2024-07-16 00:24:15.347223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.878 [2024-07-16 00:24:15.347230] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347236] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d7a6e0): datao=0, datal=3072, cccid=4 00:20:23.878 [2024-07-16 00:24:15.347248] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddab40) on tqpair(0x1d7a6e0): expected_datao=0, payload_size=3072 00:20:23.878 [2024-07-16 00:24:15.347256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347266] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347274] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.878 [2024-07-16 00:24:15.347333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.878 [2024-07-16 00:24:15.347340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddab40) on tqpair=0x1d7a6e0 00:20:23.878 [2024-07-16 00:24:15.347362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d7a6e0) 00:20:23.878 [2024-07-16 00:24:15.347388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.878 [2024-07-16 00:24:15.347416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddab40, cid 4, qid 0 00:20:23.878 [2024-07-16 00:24:15.347572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.878 [2024-07-16 00:24:15.347587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.878 [2024-07-16 00:24:15.347594] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347600] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d7a6e0): datao=0, datal=8, cccid=4 00:20:23.878 [2024-07-16 00:24:15.347607] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddab40) on tqpair(0x1d7a6e0): expected_datao=0, payload_size=8 00:20:23.878 [2024-07-16 00:24:15.347615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347624] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.878 [2024-07-16 00:24:15.347632] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.879 [2024-07-16 00:24:15.388039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.879 [2024-07-16 00:24:15.388057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.879 [2024-07-16 00:24:15.388065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.879 [2024-07-16 00:24:15.388072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddab40) on tqpair=0x1d7a6e0 00:20:23.879 ===================================================== 00:20:23.879 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:23.879 ===================================================== 00:20:23.879 Controller Capabilities/Features 00:20:23.879 ================================ 00:20:23.879 Vendor ID: 0000 00:20:23.879 Subsystem Vendor ID: 0000 00:20:23.879 Serial Number: .................... 00:20:23.879 Model Number: ........................................ 
00:20:23.879 Firmware Version: 24.09 00:20:23.879 Recommended Arb Burst: 0 00:20:23.879 IEEE OUI Identifier: 00 00 00 00:20:23.879 Multi-path I/O 00:20:23.879 May have multiple subsystem ports: No 00:20:23.879 May have multiple controllers: No 00:20:23.879 Associated with SR-IOV VF: No 00:20:23.879 Max Data Transfer Size: 131072 00:20:23.879 Max Number of Namespaces: 0 00:20:23.879 Max Number of I/O Queues: 1024 00:20:23.879 NVMe Specification Version (VS): 1.3 00:20:23.879 NVMe Specification Version (Identify): 1.3 00:20:23.879 Maximum Queue Entries: 128 00:20:23.879 Contiguous Queues Required: Yes 00:20:23.879 Arbitration Mechanisms Supported 00:20:23.879 Weighted Round Robin: Not Supported 00:20:23.879 Vendor Specific: Not Supported 00:20:23.879 Reset Timeout: 15000 ms 00:20:23.879 Doorbell Stride: 4 bytes 00:20:23.879 NVM Subsystem Reset: Not Supported 00:20:23.879 Command Sets Supported 00:20:23.879 NVM Command Set: Supported 00:20:23.879 Boot Partition: Not Supported 00:20:23.879 Memory Page Size Minimum: 4096 bytes 00:20:23.879 Memory Page Size Maximum: 4096 bytes 00:20:23.879 Persistent Memory Region: Not Supported 00:20:23.879 Optional Asynchronous Events Supported 00:20:23.879 Namespace Attribute Notices: Not Supported 00:20:23.879 Firmware Activation Notices: Not Supported 00:20:23.879 ANA Change Notices: Not Supported 00:20:23.879 PLE Aggregate Log Change Notices: Not Supported 00:20:23.879 LBA Status Info Alert Notices: Not Supported 00:20:23.879 EGE Aggregate Log Change Notices: Not Supported 00:20:23.879 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.879 Zone Descriptor Change Notices: Not Supported 00:20:23.879 Discovery Log Change Notices: Supported 00:20:23.879 Controller Attributes 00:20:23.879 128-bit Host Identifier: Not Supported 00:20:23.879 Non-Operational Permissive Mode: Not Supported 00:20:23.879 NVM Sets: Not Supported 00:20:23.879 Read Recovery Levels: Not Supported 00:20:23.879 Endurance Groups: Not Supported 00:20:23.879 Predictable Latency Mode: Not Supported 00:20:23.879 Traffic Based Keep Alive: Not Supported 00:20:23.879 Namespace Granularity: Not Supported 00:20:23.879 SQ Associations: Not Supported 00:20:23.879 UUID List: Not Supported 00:20:23.879 Multi-Domain Subsystem: Not Supported 00:20:23.879 Fixed Capacity Management: Not Supported 00:20:23.879 Variable Capacity Management: Not Supported 00:20:23.879 Delete Endurance Group: Not Supported 00:20:23.879 Delete NVM Set: Not Supported 00:20:23.879 Extended LBA Formats Supported: Not Supported 00:20:23.879 Flexible Data Placement Supported: Not Supported 00:20:23.879 00:20:23.879 Controller Memory Buffer Support 00:20:23.879 ================================ 00:20:23.879 Supported: No 00:20:23.879 00:20:23.879 Persistent Memory Region Support 00:20:23.879 ================================ 00:20:23.879 Supported: No 00:20:23.879 00:20:23.879 Admin Command Set Attributes 00:20:23.879 ============================ 00:20:23.879 Security Send/Receive: Not Supported 00:20:23.879 Format NVM: Not Supported 00:20:23.879 Firmware Activate/Download: Not Supported 00:20:23.879 Namespace Management: Not Supported 00:20:23.879 Device Self-Test: Not Supported 00:20:23.879 Directives: Not Supported 00:20:23.879 NVMe-MI: Not Supported 00:20:23.879 Virtualization Management: Not Supported 00:20:23.879 Doorbell Buffer Config: Not Supported 00:20:23.879 Get LBA Status Capability: Not Supported 00:20:23.879 Command & Feature Lockdown Capability: Not Supported 00:20:23.879 Abort Command Limit: 1 00:20:23.879 Async 
Event Request Limit: 4 00:20:23.879 Number of Firmware Slots: N/A 00:20:23.879 Firmware Slot 1 Read-Only: N/A 00:20:23.879 Firmware Activation Without Reset: N/A 00:20:23.879 Multiple Update Detection Support: N/A 00:20:23.879 Firmware Update Granularity: No Information Provided 00:20:23.879 Per-Namespace SMART Log: No 00:20:23.879 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.879 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:23.879 Command Effects Log Page: Not Supported 00:20:23.879 Get Log Page Extended Data: Supported 00:20:23.879 Telemetry Log Pages: Not Supported 00:20:23.879 Persistent Event Log Pages: Not Supported 00:20:23.879 Supported Log Pages Log Page: May Support 00:20:23.879 Commands Supported & Effects Log Page: Not Supported 00:20:23.879 Feature Identifiers & Effects Log Page:May Support 00:20:23.879 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.879 Data Area 4 for Telemetry Log: Not Supported 00:20:23.879 Error Log Page Entries Supported: 128 00:20:23.879 Keep Alive: Not Supported 00:20:23.879 00:20:23.879 NVM Command Set Attributes 00:20:23.879 ========================== 00:20:23.879 Submission Queue Entry Size 00:20:23.879 Max: 1 00:20:23.879 Min: 1 00:20:23.879 Completion Queue Entry Size 00:20:23.879 Max: 1 00:20:23.879 Min: 1 00:20:23.879 Number of Namespaces: 0 00:20:23.879 Compare Command: Not Supported 00:20:23.879 Write Uncorrectable Command: Not Supported 00:20:23.879 Dataset Management Command: Not Supported 00:20:23.879 Write Zeroes Command: Not Supported 00:20:23.879 Set Features Save Field: Not Supported 00:20:23.879 Reservations: Not Supported 00:20:23.879 Timestamp: Not Supported 00:20:23.879 Copy: Not Supported 00:20:23.879 Volatile Write Cache: Not Present 00:20:23.879 Atomic Write Unit (Normal): 1 00:20:23.879 Atomic Write Unit (PFail): 1 00:20:23.879 Atomic Compare & Write Unit: 1 00:20:23.879 Fused Compare & Write: Supported 00:20:23.879 Scatter-Gather List 00:20:23.879 SGL Command Set: Supported 00:20:23.879 SGL Keyed: Supported 00:20:23.879 SGL Bit Bucket Descriptor: Not Supported 00:20:23.879 SGL Metadata Pointer: Not Supported 00:20:23.879 Oversized SGL: Not Supported 00:20:23.879 SGL Metadata Address: Not Supported 00:20:23.879 SGL Offset: Supported 00:20:23.879 Transport SGL Data Block: Not Supported 00:20:23.879 Replay Protected Memory Block: Not Supported 00:20:23.879 00:20:23.879 Firmware Slot Information 00:20:23.879 ========================= 00:20:23.879 Active slot: 0 00:20:23.879 00:20:23.879 00:20:23.879 Error Log 00:20:23.879 ========= 00:20:23.879 00:20:23.879 Active Namespaces 00:20:23.879 ================= 00:20:23.879 Discovery Log Page 00:20:23.879 ================== 00:20:23.879 Generation Counter: 2 00:20:23.879 Number of Records: 2 00:20:23.879 Record Format: 0 00:20:23.879 00:20:23.879 Discovery Log Entry 0 00:20:23.879 ---------------------- 00:20:23.879 Transport Type: 3 (TCP) 00:20:23.879 Address Family: 1 (IPv4) 00:20:23.879 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:23.879 Entry Flags: 00:20:23.879 Duplicate Returned Information: 1 00:20:23.879 Explicit Persistent Connection Support for Discovery: 1 00:20:23.879 Transport Requirements: 00:20:23.879 Secure Channel: Not Required 00:20:23.879 Port ID: 0 (0x0000) 00:20:23.879 Controller ID: 65535 (0xffff) 00:20:23.879 Admin Max SQ Size: 128 00:20:23.879 Transport Service Identifier: 4420 00:20:23.879 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:23.879 Transport Address: 10.0.0.2 00:20:23.879 
Discovery Log Entry 1 00:20:23.879 ---------------------- 00:20:23.879 Transport Type: 3 (TCP) 00:20:23.879 Address Family: 1 (IPv4) 00:20:23.879 Subsystem Type: 2 (NVM Subsystem) 00:20:23.879 Entry Flags: 00:20:23.879 Duplicate Returned Information: 0 00:20:23.879 Explicit Persistent Connection Support for Discovery: 0 00:20:23.879 Transport Requirements: 00:20:23.880 Secure Channel: Not Required 00:20:23.880 Port ID: 0 (0x0000) 00:20:23.880 Controller ID: 65535 (0xffff) 00:20:23.880 Admin Max SQ Size: 128 00:20:23.880 Transport Service Identifier: 4420 00:20:23.880 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:23.880 Transport Address: 10.0.0.2 [2024-07-16 00:24:15.388191] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:23.880 [2024-07-16 00:24:15.388214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda540) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.388227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.880 [2024-07-16 00:24:15.388236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda6c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.388243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.880 [2024-07-16 00:24:15.388251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda840) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.388259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.880 [2024-07-16 00:24:15.388267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.388274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.880 [2024-07-16 00:24:15.388289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.388297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.388304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.388330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.388357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.388556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.388572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.388578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.388585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.388599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.388610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.388617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 
00:24:15.388628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.388656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.388813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.388828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.388835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.388842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.388852] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:23.880 [2024-07-16 00:24:15.388861] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:23.880 [2024-07-16 00:24:15.388885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.388895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.388902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.388913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.388934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.389111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.389123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.389130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.389153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.389179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.389200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.389358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.389372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.389379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.389402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389418] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.389428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.389449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.389591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.389606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.389613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.389640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.389667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.389688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.389827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.389842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.389849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.389872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.389896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.389907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.389928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.390083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.390095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.390102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.390124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.390150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.390171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.390323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.390335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.390342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.390364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.390390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.390410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.390546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.390558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.390565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.390591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.390618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.390639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.880 [2024-07-16 00:24:15.390773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.880 [2024-07-16 00:24:15.390785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.880 [2024-07-16 00:24:15.390792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.880 [2024-07-16 00:24:15.390814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390823] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.880 [2024-07-16 00:24:15.390830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.880 [2024-07-16 00:24:15.390840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.880 [2024-07-16 00:24:15.390860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.881 
[2024-07-16 00:24:15.394905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.881 [2024-07-16 00:24:15.394922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.881 [2024-07-16 00:24:15.394929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.881 [2024-07-16 00:24:15.394936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.881 [2024-07-16 00:24:15.394954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.881 [2024-07-16 00:24:15.394964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.881 [2024-07-16 00:24:15.394971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d7a6e0) 00:20:23.881 [2024-07-16 00:24:15.394981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.881 [2024-07-16 00:24:15.395004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dda9c0, cid 3, qid 0 00:20:23.881 [2024-07-16 00:24:15.395172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.881 [2024-07-16 00:24:15.395184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.881 [2024-07-16 00:24:15.395191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.881 [2024-07-16 00:24:15.395197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dda9c0) on tqpair=0x1d7a6e0 00:20:23.881 [2024-07-16 00:24:15.395211] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:23.881 00:20:23.881 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:23.881 [2024-07-16 00:24:15.430469] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
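The spdk_nvme_identify invocation above connects to nqn.2016-06.io.spdk:cnode1 over NVMe/TCP and drives the full controller-initialization state machine that the DEBUG trace below records: FABRIC CONNECT on the admin queue, reads of VS and CAP, CC.EN = 1, polling CSTS.RDY, then IDENTIFY, AER configuration, keep-alive and queue-count negotiation. For orientation only, here is a minimal sketch of the same flow using SPDK's public host API; it is not part of the test output, and the program name and printed fields are illustrative assumptions. The Discovery Log Page dumped earlier (two records: the discovery subsystem itself and cnode1, both TCP/IPv4 at 10.0.0.2:4420) is the same data a host would see with, e.g., nvme-cli's "nvme discover -t tcp -a 10.0.0.2 -s 4420".

/* identify_sketch.c -- hedged sketch of what the identify run above does.
 * Assumes only SPDK's public host API (spdk/env.h, spdk/nvme.h);
 * error handling is trimmed for brevity. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (hugepages, DPDK EAL), as the
	 * "DPDK EAL parameters" line below shows the test app doing. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* illustrative app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string the test passes via -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() runs the init sequence traced below: connect
	 * the admin queue, read VS/CAP, set CC.EN, wait for CSTS.RDY = 1,
	 * IDENTIFY the controller, then configure AER and keep-alive. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* A few of the fields the identify report at the end prints. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number: %.40s\n", (const char *)cdata->mn);
	printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}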
00:20:23.881 [2024-07-16 00:24:15.430512] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338819 ] 00:20:23.881 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.881 [2024-07-16 00:24:15.465657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:23.881 [2024-07-16 00:24:15.465710] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.881 [2024-07-16 00:24:15.465720] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.881 [2024-07-16 00:24:15.465734] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.881 [2024-07-16 00:24:15.465743] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.881 [2024-07-16 00:24:15.466224] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:23.881 [2024-07-16 00:24:15.466264] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x232c6e0 0 00:20:23.881 [2024-07-16 00:24:15.480890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.881 [2024-07-16 00:24:15.480912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.881 [2024-07-16 00:24:15.480921] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.881 [2024-07-16 00:24:15.480927] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.881 [2024-07-16 00:24:15.480975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.881 [2024-07-16 00:24:15.480987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.881 [2024-07-16 00:24:15.480994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:23.881 [2024-07-16 00:24:15.481008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.881 [2024-07-16 00:24:15.481035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.142 [2024-07-16 00:24:15.488902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.142 [2024-07-16 00:24:15.488919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.142 [2024-07-16 00:24:15.488926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.488933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.142 [2024-07-16 00:24:15.488961] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:24.142 [2024-07-16 00:24:15.488972] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:24.142 [2024-07-16 00:24:15.488982] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:24.142 [2024-07-16 00:24:15.488999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:24.142 [2024-07-16 00:24:15.489030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.142 [2024-07-16 00:24:15.489041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.142 [2024-07-16 00:24:15.489066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.142 [2024-07-16 00:24:15.489227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.142 [2024-07-16 00:24:15.489240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.142 [2024-07-16 00:24:15.489247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.142 [2024-07-16 00:24:15.489261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:24.142 [2024-07-16 00:24:15.489275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:24.142 [2024-07-16 00:24:15.489303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.142 [2024-07-16 00:24:15.489333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.142 [2024-07-16 00:24:15.489354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.142 [2024-07-16 00:24:15.489489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.142 [2024-07-16 00:24:15.489502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.142 [2024-07-16 00:24:15.489508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.142 [2024-07-16 00:24:15.489523] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:24.142 [2024-07-16 00:24:15.489536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:24.142 [2024-07-16 00:24:15.489548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.142 [2024-07-16 00:24:15.489572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.142 [2024-07-16 00:24:15.489592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.142 [2024-07-16 00:24:15.489726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.142 [2024-07-16 00:24:15.489739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:24.142 [2024-07-16 00:24:15.489745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.142 [2024-07-16 00:24:15.489759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:24.142 [2024-07-16 00:24:15.489775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.489791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.142 [2024-07-16 00:24:15.489801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.142 [2024-07-16 00:24:15.489822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.142 [2024-07-16 00:24:15.489993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.142 [2024-07-16 00:24:15.490008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.142 [2024-07-16 00:24:15.490014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.490021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.142 [2024-07-16 00:24:15.490029] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:24.142 [2024-07-16 00:24:15.490037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:24.142 [2024-07-16 00:24:15.490050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:24.142 [2024-07-16 00:24:15.490160] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:24.142 [2024-07-16 00:24:15.490168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:24.142 [2024-07-16 00:24:15.490199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.490208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.490215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.142 [2024-07-16 00:24:15.490225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.142 [2024-07-16 00:24:15.490248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.142 [2024-07-16 00:24:15.490389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.142 [2024-07-16 00:24:15.490405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.142 [2024-07-16 00:24:15.490412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.490419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on 
tqpair=0x232c6e0 00:20:24.142 [2024-07-16 00:24:15.490427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:24.142 [2024-07-16 00:24:15.490445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.490454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.490460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.142 [2024-07-16 00:24:15.490471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.142 [2024-07-16 00:24:15.490492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.142 [2024-07-16 00:24:15.490630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.142 [2024-07-16 00:24:15.490646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.142 [2024-07-16 00:24:15.490652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.142 [2024-07-16 00:24:15.490659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.143 [2024-07-16 00:24:15.490666] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:24.143 [2024-07-16 00:24:15.490674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.490688] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:24.143 [2024-07-16 00:24:15.490705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.490720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.490727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.490738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.143 [2024-07-16 00:24:15.490760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.143 [2024-07-16 00:24:15.490962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.143 [2024-07-16 00:24:15.490978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.143 [2024-07-16 00:24:15.490985] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.490991] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232c6e0): datao=0, datal=4096, cccid=0 00:20:24.143 [2024-07-16 00:24:15.490999] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238c540) on tqpair(0x232c6e0): expected_datao=0, payload_size=4096 00:20:24.143 [2024-07-16 00:24:15.491010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491030] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491040] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.143 [2024-07-16 00:24:15.491158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.143 [2024-07-16 00:24:15.491165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.143 [2024-07-16 00:24:15.491183] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:24.143 [2024-07-16 00:24:15.491191] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:24.143 [2024-07-16 00:24:15.491214] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:24.143 [2024-07-16 00:24:15.491221] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:24.143 [2024-07-16 00:24:15.491228] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:24.143 [2024-07-16 00:24:15.491236] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.491250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.491267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.491293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:24.143 [2024-07-16 00:24:15.491315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.143 [2024-07-16 00:24:15.491462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.143 [2024-07-16 00:24:15.491477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.143 [2024-07-16 00:24:15.491483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.143 [2024-07-16 00:24:15.491500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.491523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.143 [2024-07-16 00:24:15.491533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491546] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.491555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.143 [2024-07-16 00:24:15.491564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.491586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.143 [2024-07-16 00:24:15.491599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.491621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.143 [2024-07-16 00:24:15.491630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.491648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.491661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.491679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.143 [2024-07-16 00:24:15.491703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c540, cid 0, qid 0 00:20:24.143 [2024-07-16 00:24:15.491714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c6c0, cid 1, qid 0 00:20:24.143 [2024-07-16 00:24:15.491722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c840, cid 2, qid 0 00:20:24.143 [2024-07-16 00:24:15.491730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c9c0, cid 3, qid 0 00:20:24.143 [2024-07-16 00:24:15.491738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cb40, cid 4, qid 0 00:20:24.143 [2024-07-16 00:24:15.491913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.143 [2024-07-16 00:24:15.491928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.143 [2024-07-16 00:24:15.491934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.491941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cb40) on tqpair=0x232c6e0 00:20:24.143 [2024-07-16 00:24:15.491949] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:24.143 [2024-07-16 00:24:15.491958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.491977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.491990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.492001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.492010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.492016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.492027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:24.143 [2024-07-16 00:24:15.492050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cb40, cid 4, qid 0 00:20:24.143 [2024-07-16 00:24:15.492214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.143 [2024-07-16 00:24:15.492227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.143 [2024-07-16 00:24:15.492233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.492240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cb40) on tqpair=0x232c6e0 00:20:24.143 [2024-07-16 00:24:15.492306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.492330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.492346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.492354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.492364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.143 [2024-07-16 00:24:15.492385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cb40, cid 4, qid 0 00:20:24.143 [2024-07-16 00:24:15.492538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.143 [2024-07-16 00:24:15.492551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.143 [2024-07-16 00:24:15.492557] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.492563] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232c6e0): datao=0, datal=4096, cccid=4 00:20:24.143 [2024-07-16 00:24:15.492571] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238cb40) on tqpair(0x232c6e0): expected_datao=0, payload_size=4096 00:20:24.143 [2024-07-16 00:24:15.492578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.492603] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.492613] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.534036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:20:24.143 [2024-07-16 00:24:15.534055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.143 [2024-07-16 00:24:15.534062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.534069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cb40) on tqpair=0x232c6e0 00:20:24.143 [2024-07-16 00:24:15.534089] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:24.143 [2024-07-16 00:24:15.534110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.534129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:24.143 [2024-07-16 00:24:15.534143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.143 [2024-07-16 00:24:15.534151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232c6e0) 00:20:24.143 [2024-07-16 00:24:15.534163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.143 [2024-07-16 00:24:15.534186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cb40, cid 4, qid 0 00:20:24.144 [2024-07-16 00:24:15.534360] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.144 [2024-07-16 00:24:15.534375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.144 [2024-07-16 00:24:15.534382] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.534388] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232c6e0): datao=0, datal=4096, cccid=4 00:20:24.144 [2024-07-16 00:24:15.534396] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238cb40) on tqpair(0x232c6e0): expected_datao=0, payload_size=4096 00:20:24.144 [2024-07-16 00:24:15.534403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.534429] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.534438] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.576005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.144 [2024-07-16 00:24:15.576025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.144 [2024-07-16 00:24:15.576032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.576046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cb40) on tqpair=0x232c6e0 00:20:24.144 [2024-07-16 00:24:15.576070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.576091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.576106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.576115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.576126] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.576150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cb40, cid 4, qid 0 00:20:24.144 [2024-07-16 00:24:15.576319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.144 [2024-07-16 00:24:15.576332] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.144 [2024-07-16 00:24:15.576339] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.576346] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232c6e0): datao=0, datal=4096, cccid=4 00:20:24.144 [2024-07-16 00:24:15.576353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238cb40) on tqpair(0x232c6e0): expected_datao=0, payload_size=4096 00:20:24.144 [2024-07-16 00:24:15.576360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.576386] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.576396] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.620895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.144 [2024-07-16 00:24:15.620914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.144 [2024-07-16 00:24:15.620921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.620928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cb40) on tqpair=0x232c6e0 00:20:24.144 [2024-07-16 00:24:15.620943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.620960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.620977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.620990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.620999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.621008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.621017] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:24.144 [2024-07-16 00:24:15.621025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:24.144 [2024-07-16 00:24:15.621034] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:24.144 [2024-07-16 00:24:15.621053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.621074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.621090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.621114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.144 [2024-07-16 00:24:15.621141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cb40, cid 4, qid 0 00:20:24.144 [2024-07-16 00:24:15.621154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238ccc0, cid 5, qid 0 00:20:24.144 [2024-07-16 00:24:15.621313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.144 [2024-07-16 00:24:15.621326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.144 [2024-07-16 00:24:15.621333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cb40) on tqpair=0x232c6e0 00:20:24.144 [2024-07-16 00:24:15.621349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.144 [2024-07-16 00:24:15.621358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.144 [2024-07-16 00:24:15.621365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238ccc0) on tqpair=0x232c6e0 00:20:24.144 [2024-07-16 00:24:15.621386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.621406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.621426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238ccc0, cid 5, qid 0 00:20:24.144 [2024-07-16 00:24:15.621560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.144 [2024-07-16 00:24:15.621572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.144 [2024-07-16 00:24:15.621578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238ccc0) on tqpair=0x232c6e0 00:20:24.144 [2024-07-16 00:24:15.621600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.621619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.621639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238ccc0, cid 5, qid 0 00:20:24.144 [2024-07-16 00:24:15.621786] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.144 [2024-07-16 00:24:15.621801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.144 [2024-07-16 00:24:15.621808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238ccc0) on tqpair=0x232c6e0 00:20:24.144 [2024-07-16 00:24:15.621831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.621840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.621850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.621895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238ccc0, cid 5, qid 0 00:20:24.144 [2024-07-16 00:24:15.622035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.144 [2024-07-16 00:24:15.622048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.144 [2024-07-16 00:24:15.622058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238ccc0) on tqpair=0x232c6e0 00:20:24.144 [2024-07-16 00:24:15.622091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.622113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.622126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.622143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.622155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.622172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.622201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x232c6e0) 00:20:24.144 [2024-07-16 00:24:15.622218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.144 [2024-07-16 00:24:15.622239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238ccc0, cid 5, qid 0 00:20:24.144 [2024-07-16 00:24:15.622250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cb40, cid 4, qid 0 
00:20:24.144 [2024-07-16 00:24:15.622258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238ce40, cid 6, qid 0 00:20:24.144 [2024-07-16 00:24:15.622266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cfc0, cid 7, qid 0 00:20:24.144 [2024-07-16 00:24:15.622484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.144 [2024-07-16 00:24:15.622496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.144 [2024-07-16 00:24:15.622503] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622509] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232c6e0): datao=0, datal=8192, cccid=5 00:20:24.144 [2024-07-16 00:24:15.622516] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238ccc0) on tqpair(0x232c6e0): expected_datao=0, payload_size=8192 00:20:24.144 [2024-07-16 00:24:15.622523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622556] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622566] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.144 [2024-07-16 00:24:15.622574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.144 [2024-07-16 00:24:15.622583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.144 [2024-07-16 00:24:15.622589] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622596] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232c6e0): datao=0, datal=512, cccid=4 00:20:24.145 [2024-07-16 00:24:15.622603] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238cb40) on tqpair(0x232c6e0): expected_datao=0, payload_size=512 00:20:24.145 [2024-07-16 00:24:15.622610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622619] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622626] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.145 [2024-07-16 00:24:15.622648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.145 [2024-07-16 00:24:15.622654] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622660] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232c6e0): datao=0, datal=512, cccid=6 00:20:24.145 [2024-07-16 00:24:15.622667] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238ce40) on tqpair(0x232c6e0): expected_datao=0, payload_size=512 00:20:24.145 [2024-07-16 00:24:15.622674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622683] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622691] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.145 [2024-07-16 00:24:15.622708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.145 [2024-07-16 00:24:15.622714] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622720] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232c6e0): datao=0, datal=4096, cccid=7 00:20:24.145 [2024-07-16 00:24:15.622727] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238cfc0) on tqpair(0x232c6e0): expected_datao=0, payload_size=4096 00:20:24.145 [2024-07-16 00:24:15.622734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622743] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622751] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.145 [2024-07-16 00:24:15.622770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.145 [2024-07-16 00:24:15.622777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238ccc0) on tqpair=0x232c6e0 00:20:24.145 [2024-07-16 00:24:15.622801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.145 [2024-07-16 00:24:15.622812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.145 [2024-07-16 00:24:15.622819] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cb40) on tqpair=0x232c6e0 00:20:24.145 [2024-07-16 00:24:15.622840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.145 [2024-07-16 00:24:15.622850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.145 [2024-07-16 00:24:15.622872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238ce40) on tqpair=0x232c6e0 00:20:24.145 [2024-07-16 00:24:15.622898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.145 [2024-07-16 00:24:15.622908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.145 [2024-07-16 00:24:15.622914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.145 [2024-07-16 00:24:15.622921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cfc0) on tqpair=0x232c6e0 00:20:24.145 ===================================================== 00:20:24.145 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.145 ===================================================== 00:20:24.145 Controller Capabilities/Features 00:20:24.145 ================================ 00:20:24.145 Vendor ID: 8086 00:20:24.145 Subsystem Vendor ID: 8086 00:20:24.145 Serial Number: SPDK00000000000001 00:20:24.145 Model Number: SPDK bdev Controller 00:20:24.145 Firmware Version: 24.09 00:20:24.145 Recommended Arb Burst: 6 00:20:24.145 IEEE OUI Identifier: e4 d2 5c 00:20:24.145 Multi-path I/O 00:20:24.145 May have multiple subsystem ports: Yes 00:20:24.145 May have multiple controllers: Yes 00:20:24.145 Associated with SR-IOV VF: No 00:20:24.145 Max Data Transfer Size: 131072 00:20:24.145 Max Number of Namespaces: 32 00:20:24.145 Max Number of I/O Queues: 127 00:20:24.145 NVMe Specification Version (VS): 1.3 00:20:24.145 NVMe Specification Version (Identify): 1.3 00:20:24.145 Maximum Queue Entries: 128 00:20:24.145 Contiguous Queues Required: Yes 00:20:24.145 
Arbitration Mechanisms Supported 00:20:24.145 Weighted Round Robin: Not Supported 00:20:24.145 Vendor Specific: Not Supported 00:20:24.145 Reset Timeout: 15000 ms 00:20:24.145 Doorbell Stride: 4 bytes 00:20:24.145 NVM Subsystem Reset: Not Supported 00:20:24.145 Command Sets Supported 00:20:24.145 NVM Command Set: Supported 00:20:24.145 Boot Partition: Not Supported 00:20:24.145 Memory Page Size Minimum: 4096 bytes 00:20:24.145 Memory Page Size Maximum: 4096 bytes 00:20:24.145 Persistent Memory Region: Not Supported 00:20:24.145 Optional Asynchronous Events Supported 00:20:24.145 Namespace Attribute Notices: Supported 00:20:24.145 Firmware Activation Notices: Not Supported 00:20:24.145 ANA Change Notices: Not Supported 00:20:24.145 PLE Aggregate Log Change Notices: Not Supported 00:20:24.145 LBA Status Info Alert Notices: Not Supported 00:20:24.145 EGE Aggregate Log Change Notices: Not Supported 00:20:24.145 Normal NVM Subsystem Shutdown event: Not Supported 00:20:24.145 Zone Descriptor Change Notices: Not Supported 00:20:24.145 Discovery Log Change Notices: Not Supported 00:20:24.145 Controller Attributes 00:20:24.145 128-bit Host Identifier: Supported 00:20:24.145 Non-Operational Permissive Mode: Not Supported 00:20:24.145 NVM Sets: Not Supported 00:20:24.145 Read Recovery Levels: Not Supported 00:20:24.145 Endurance Groups: Not Supported 00:20:24.145 Predictable Latency Mode: Not Supported 00:20:24.145 Traffic Based Keep ALive: Not Supported 00:20:24.145 Namespace Granularity: Not Supported 00:20:24.145 SQ Associations: Not Supported 00:20:24.145 UUID List: Not Supported 00:20:24.145 Multi-Domain Subsystem: Not Supported 00:20:24.145 Fixed Capacity Management: Not Supported 00:20:24.145 Variable Capacity Management: Not Supported 00:20:24.145 Delete Endurance Group: Not Supported 00:20:24.145 Delete NVM Set: Not Supported 00:20:24.145 Extended LBA Formats Supported: Not Supported 00:20:24.145 Flexible Data Placement Supported: Not Supported 00:20:24.145 00:20:24.145 Controller Memory Buffer Support 00:20:24.145 ================================ 00:20:24.145 Supported: No 00:20:24.145 00:20:24.145 Persistent Memory Region Support 00:20:24.145 ================================ 00:20:24.145 Supported: No 00:20:24.145 00:20:24.145 Admin Command Set Attributes 00:20:24.145 ============================ 00:20:24.145 Security Send/Receive: Not Supported 00:20:24.145 Format NVM: Not Supported 00:20:24.145 Firmware Activate/Download: Not Supported 00:20:24.145 Namespace Management: Not Supported 00:20:24.145 Device Self-Test: Not Supported 00:20:24.145 Directives: Not Supported 00:20:24.145 NVMe-MI: Not Supported 00:20:24.145 Virtualization Management: Not Supported 00:20:24.145 Doorbell Buffer Config: Not Supported 00:20:24.145 Get LBA Status Capability: Not Supported 00:20:24.145 Command & Feature Lockdown Capability: Not Supported 00:20:24.145 Abort Command Limit: 4 00:20:24.145 Async Event Request Limit: 4 00:20:24.145 Number of Firmware Slots: N/A 00:20:24.145 Firmware Slot 1 Read-Only: N/A 00:20:24.145 Firmware Activation Without Reset: N/A 00:20:24.145 Multiple Update Detection Support: N/A 00:20:24.145 Firmware Update Granularity: No Information Provided 00:20:24.145 Per-Namespace SMART Log: No 00:20:24.145 Asymmetric Namespace Access Log Page: Not Supported 00:20:24.145 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:24.145 Command Effects Log Page: Supported 00:20:24.145 Get Log Page Extended Data: Supported 00:20:24.145 Telemetry Log Pages: Not Supported 00:20:24.145 Persistent Event Log 
Pages: Not Supported 00:20:24.145 Supported Log Pages Log Page: May Support 00:20:24.145 Commands Supported & Effects Log Page: Not Supported 00:20:24.145 Feature Identifiers & Effects Log Page:May Support 00:20:24.145 NVMe-MI Commands & Effects Log Page: May Support 00:20:24.145 Data Area 4 for Telemetry Log: Not Supported 00:20:24.145 Error Log Page Entries Supported: 128 00:20:24.145 Keep Alive: Supported 00:20:24.145 Keep Alive Granularity: 10000 ms 00:20:24.145 00:20:24.145 NVM Command Set Attributes 00:20:24.145 ========================== 00:20:24.145 Submission Queue Entry Size 00:20:24.145 Max: 64 00:20:24.145 Min: 64 00:20:24.145 Completion Queue Entry Size 00:20:24.145 Max: 16 00:20:24.145 Min: 16 00:20:24.145 Number of Namespaces: 32 00:20:24.145 Compare Command: Supported 00:20:24.145 Write Uncorrectable Command: Not Supported 00:20:24.145 Dataset Management Command: Supported 00:20:24.145 Write Zeroes Command: Supported 00:20:24.145 Set Features Save Field: Not Supported 00:20:24.145 Reservations: Supported 00:20:24.145 Timestamp: Not Supported 00:20:24.145 Copy: Supported 00:20:24.145 Volatile Write Cache: Present 00:20:24.145 Atomic Write Unit (Normal): 1 00:20:24.145 Atomic Write Unit (PFail): 1 00:20:24.145 Atomic Compare & Write Unit: 1 00:20:24.145 Fused Compare & Write: Supported 00:20:24.145 Scatter-Gather List 00:20:24.145 SGL Command Set: Supported 00:20:24.145 SGL Keyed: Supported 00:20:24.145 SGL Bit Bucket Descriptor: Not Supported 00:20:24.145 SGL Metadata Pointer: Not Supported 00:20:24.145 Oversized SGL: Not Supported 00:20:24.145 SGL Metadata Address: Not Supported 00:20:24.145 SGL Offset: Supported 00:20:24.145 Transport SGL Data Block: Not Supported 00:20:24.145 Replay Protected Memory Block: Not Supported 00:20:24.145 00:20:24.145 Firmware Slot Information 00:20:24.145 ========================= 00:20:24.145 Active slot: 1 00:20:24.145 Slot 1 Firmware Revision: 24.09 00:20:24.146 00:20:24.146 00:20:24.146 Commands Supported and Effects 00:20:24.146 ============================== 00:20:24.146 Admin Commands 00:20:24.146 -------------- 00:20:24.146 Get Log Page (02h): Supported 00:20:24.146 Identify (06h): Supported 00:20:24.146 Abort (08h): Supported 00:20:24.146 Set Features (09h): Supported 00:20:24.146 Get Features (0Ah): Supported 00:20:24.146 Asynchronous Event Request (0Ch): Supported 00:20:24.146 Keep Alive (18h): Supported 00:20:24.146 I/O Commands 00:20:24.146 ------------ 00:20:24.146 Flush (00h): Supported LBA-Change 00:20:24.146 Write (01h): Supported LBA-Change 00:20:24.146 Read (02h): Supported 00:20:24.146 Compare (05h): Supported 00:20:24.146 Write Zeroes (08h): Supported LBA-Change 00:20:24.146 Dataset Management (09h): Supported LBA-Change 00:20:24.146 Copy (19h): Supported LBA-Change 00:20:24.146 00:20:24.146 Error Log 00:20:24.146 ========= 00:20:24.146 00:20:24.146 Arbitration 00:20:24.146 =========== 00:20:24.146 Arbitration Burst: 1 00:20:24.146 00:20:24.146 Power Management 00:20:24.146 ================ 00:20:24.146 Number of Power States: 1 00:20:24.146 Current Power State: Power State #0 00:20:24.146 Power State #0: 00:20:24.146 Max Power: 0.00 W 00:20:24.146 Non-Operational State: Operational 00:20:24.146 Entry Latency: Not Reported 00:20:24.146 Exit Latency: Not Reported 00:20:24.146 Relative Read Throughput: 0 00:20:24.146 Relative Read Latency: 0 00:20:24.146 Relative Write Throughput: 0 00:20:24.146 Relative Write Latency: 0 00:20:24.146 Idle Power: Not Reported 00:20:24.146 Active Power: Not Reported 00:20:24.146 
Non-Operational Permissive Mode: Not Supported 00:20:24.146 00:20:24.146 Health Information 00:20:24.146 ================== 00:20:24.146 Critical Warnings: 00:20:24.146 Available Spare Space: OK 00:20:24.146 Temperature: OK 00:20:24.146 Device Reliability: OK 00:20:24.146 Read Only: No 00:20:24.146 Volatile Memory Backup: OK 00:20:24.146 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:24.146 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:24.146 Available Spare: 0% 00:20:24.146 Available Spare Threshold: 0% 00:20:24.146 Life Percentage Used:[2024-07-16 00:24:15.623040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x232c6e0) 00:20:24.146 [2024-07-16 00:24:15.623063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.146 [2024-07-16 00:24:15.623086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238cfc0, cid 7, qid 0 00:20:24.146 [2024-07-16 00:24:15.623250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.146 [2024-07-16 00:24:15.623263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.146 [2024-07-16 00:24:15.623273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238cfc0) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.623327] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:24.146 [2024-07-16 00:24:15.623346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c540) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.623357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.146 [2024-07-16 00:24:15.623366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c6c0) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.623373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.146 [2024-07-16 00:24:15.623381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c840) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.623388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.146 [2024-07-16 00:24:15.623397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c9c0) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.623404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.146 [2024-07-16 00:24:15.623416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232c6e0) 00:20:24.146 [2024-07-16 00:24:15.623441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.146 [2024-07-16 00:24:15.623462] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c9c0, cid 3, qid 0 00:20:24.146 [2024-07-16 00:24:15.623603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.146 [2024-07-16 00:24:15.623618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.146 [2024-07-16 00:24:15.623624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c9c0) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.623643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232c6e0) 00:20:24.146 [2024-07-16 00:24:15.623667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.146 [2024-07-16 00:24:15.623693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c9c0, cid 3, qid 0 00:20:24.146 [2024-07-16 00:24:15.623838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.146 [2024-07-16 00:24:15.623850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.146 [2024-07-16 00:24:15.623871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c9c0) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.623894] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:24.146 [2024-07-16 00:24:15.623903] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:24.146 [2024-07-16 00:24:15.623920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.623935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232c6e0) 00:20:24.146 [2024-07-16 00:24:15.623950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.146 [2024-07-16 00:24:15.623973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c9c0, cid 3, qid 0 00:20:24.146 [2024-07-16 00:24:15.624123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.146 [2024-07-16 00:24:15.624135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.146 [2024-07-16 00:24:15.624142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.624149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c9c0) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.624165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.624175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.624181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232c6e0) 00:20:24.146 [2024-07-16 00:24:15.624192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.146 [2024-07-16 00:24:15.624212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c9c0, cid 3, qid 0 00:20:24.146 [2024-07-16 00:24:15.624366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.146 [2024-07-16 00:24:15.624382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.146 [2024-07-16 00:24:15.624388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.624395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c9c0) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.624412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.624421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.624427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232c6e0) 00:20:24.146 [2024-07-16 00:24:15.624438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.146 [2024-07-16 00:24:15.624458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c9c0, cid 3, qid 0 00:20:24.146 [2024-07-16 00:24:15.624591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.146 [2024-07-16 00:24:15.624606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.146 [2024-07-16 00:24:15.624613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.146 [2024-07-16 00:24:15.624619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c9c0) on tqpair=0x232c6e0 00:20:24.146 [2024-07-16 00:24:15.624636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.147 [2024-07-16 00:24:15.624645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.147 [2024-07-16 00:24:15.624651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232c6e0) 00:20:24.147 [2024-07-16 00:24:15.624661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.147 [2024-07-16 00:24:15.624682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c9c0, cid 3, qid 0 00:20:24.147 [2024-07-16 00:24:15.624813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.147 [2024-07-16 00:24:15.624828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.147 [2024-07-16 00:24:15.624835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.147 [2024-07-16 00:24:15.624841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c9c0) on tqpair=0x232c6e0 00:20:24.147 [2024-07-16 00:24:15.624872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.147 [2024-07-16 00:24:15.628895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.147 [2024-07-16 00:24:15.628903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232c6e0) 00:20:24.147 [2024-07-16 00:24:15.628914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.147 [2024-07-16 00:24:15.628957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238c9c0, cid 3, qid 0 00:20:24.147 [2024-07-16 
00:24:15.629114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.147 [2024-07-16 00:24:15.629126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.147 [2024-07-16 00:24:15.629133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.147 [2024-07-16 00:24:15.629140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238c9c0) on tqpair=0x232c6e0 00:20:24.147 [2024-07-16 00:24:15.629153] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:20:24.147 0% 00:20:24.147 Data Units Read: 0 00:20:24.147 Data Units Written: 0 00:20:24.147 Host Read Commands: 0 00:20:24.147 Host Write Commands: 0 00:20:24.147 Controller Busy Time: 0 minutes 00:20:24.147 Power Cycles: 0 00:20:24.147 Power On Hours: 0 hours 00:20:24.147 Unsafe Shutdowns: 0 00:20:24.147 Unrecoverable Media Errors: 0 00:20:24.147 Lifetime Error Log Entries: 0 00:20:24.147 Warning Temperature Time: 0 minutes 00:20:24.147 Critical Temperature Time: 0 minutes 00:20:24.147 00:20:24.147 Number of Queues 00:20:24.147 ================ 00:20:24.147 Number of I/O Submission Queues: 127 00:20:24.147 Number of I/O Completion Queues: 127 00:20:24.147 00:20:24.147 Active Namespaces 00:20:24.147 ================= 00:20:24.147 Namespace ID:1 00:20:24.147 Error Recovery Timeout: Unlimited 00:20:24.147 Command Set Identifier: NVM (00h) 00:20:24.147 Deallocate: Supported 00:20:24.147 Deallocated/Unwritten Error: Not Supported 00:20:24.147 Deallocated Read Value: Unknown 00:20:24.147 Deallocate in Write Zeroes: Not Supported 00:20:24.147 Deallocated Guard Field: 0xFFFF 00:20:24.147 Flush: Supported 00:20:24.147 Reservation: Supported 00:20:24.147 Namespace Sharing Capabilities: Multiple Controllers 00:20:24.147 Size (in LBAs): 131072 (0GiB) 00:20:24.147 Capacity (in LBAs): 131072 (0GiB) 00:20:24.147 Utilization (in LBAs): 131072 (0GiB) 00:20:24.147 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:24.147 EUI64: ABCDEF0123456789 00:20:24.147 UUID: 6b90be76-cc01-4ecf-8739-1129e0f0f312 00:20:24.147 Thin Provisioning: Not Supported 00:20:24.147 Per-NS Atomic Units: Yes 00:20:24.147 Atomic Boundary Size (Normal): 0 00:20:24.147 Atomic Boundary Size (PFail): 0 00:20:24.147 Atomic Boundary Offset: 0 00:20:24.147 Maximum Single Source Range Length: 65535 00:20:24.147 Maximum Copy Length: 65535 00:20:24.147 Maximum Source Range Count: 1 00:20:24.147 NGUID/EUI64 Never Reused: No 00:20:24.147 Namespace Write Protected: No 00:20:24.147 Number of LBA Formats: 1 00:20:24.147 Current LBA Format: LBA Format #00 00:20:24.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:24.147 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 
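The interleaved *DEBUG* lines above trace each admin command through the nvme_tcp PDU state machine: a capsule command goes out (nvme_tcp_qpair_capsule_cmd_send), C2HData PDUs (type 7) carry the payload back, a CapsuleResp PDU (type 5) closes the exchange, and nvme_tcp_req_complete retires the request. When auditing a saved copy of this console output offline, a rough filter along these lines helps (build.log is a hypothetical saved copy; patterns match the exact fields printed above):

# Tally which nvme_tcp handlers fired, most frequent first
grep -o 'nvme_tcp\.c: *[0-9]*:[a-z0-9_]*' build.log | sort | uniq -c | sort -rn | head

# Count request completions on the admin qpair seen above
grep -o 'complete tcp_req(0x[0-9a-f]*) on tqpair=0x232c6e0' build.log | wc -l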
00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.147 rmmod nvme_tcp 00:20:24.147 rmmod nvme_fabrics 00:20:24.147 rmmod nvme_keyring 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2338673 ']' 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2338673 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2338673 ']' 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2338673 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2338673 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2338673' 00:20:24.147 killing process with pid 2338673 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2338673 00:20:24.147 00:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2338673 00:20:24.713 00:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.713 00:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.713 00:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.713 00:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.713 00:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.713 00:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.713 00:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.713 00:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.614 00:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.614 00:20:26.614 real 0m5.366s 00:20:26.614 user 0m4.599s 00:20:26.614 sys 0m1.778s 00:20:26.614 00:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.614 00:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:26.614 ************************************ 00:20:26.614 END TEST nvmf_identify 00:20:26.614 ************************************ 00:20:26.614 00:24:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:26.614 00:24:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:26.614 
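perf.sh now re-sources nvmf/common.sh, which (as traced below) derives a host NQN via nvme gen-hostnqn, keeps its trailing UUID as the host ID, and stashes both in NVME_HOST for later nvme connect calls. Outside the harness, that plumbing amounts to roughly the following sketch (values differ per host; the connect target is the subsystem this log exercises):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # common.sh keeps the trailing UUID as the host ID
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"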
00:24:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:26.614 00:24:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.614 00:24:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:26.614 ************************************ 00:20:26.614 START TEST nvmf_perf 00:20:26.614 ************************************ 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:26.614 * Looking for test storage... 00:20:26.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.614 00:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.614 00:24:18 
nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.615 00:24:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.548 
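The pci_devs bookkeeping below builds allowlists keyed by PCI vendor:device IDs (Intel 0x1592/0x159b for E810, 0x37d2 for X722, plus several Mellanox IDs) and then resolves each matching device to its net interface under sysfs. A quick manual equivalent for the E810 parts discovered below (lspci -d filters by vendor:device; the PCI address is the one this log reports):

lspci -d 8086:159b                          # lists the two 0000:0a:00.x E810 ports found below
ls /sys/bus/pci/devices/0000:0a:00.0/net    # net interface bound to the first port (cvl_0_0 here)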
00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:28.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:28.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:28.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:28.548 Found net devices under 
0000:0a:00.1: cvl_0_1 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.548 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:20:28.806 00:20:28.806 --- 10.0.0.2 ping statistics --- 00:20:28.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.806 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:20:28.806 00:20:28.806 --- 10.0.0.1 ping statistics --- 00:20:28.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.806 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2340757 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2340757 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2340757 ']' 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.806 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.806 [2024-07-16 00:24:20.309502] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:20:28.806 [2024-07-16 00:24:20.309576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.806 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.806 [2024-07-16 00:24:20.372209] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.064 [2024-07-16 00:24:20.480559] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.064 [2024-07-16 00:24:20.480612] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:29.064 [2024-07-16 00:24:20.480640] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.064 [2024-07-16 00:24:20.480651] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.064 [2024-07-16 00:24:20.480661] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.064 [2024-07-16 00:24:20.480744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.064 [2024-07-16 00:24:20.480810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.064 [2024-07-16 00:24:20.480857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.064 [2024-07-16 00:24:20.480860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.064 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.064 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:29.064 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.064 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.064 00:24:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:29.064 00:24:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.064 00:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:29.064 00:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:32.334 00:24:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:32.334 00:24:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:32.592 00:24:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:32.592 00:24:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:32.849 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:32.849 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:32.849 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:32.850 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:32.850 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:33.107 [2024-07-16 00:24:24.479468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.107 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.365 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:33.365 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:33.645 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:33.645 00:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:33.645 00:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.904 [2024-07-16 00:24:25.475099] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.904 00:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:34.162 00:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:34.162 00:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:34.162 00:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:34.162 00:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:35.532 Initializing NVMe Controllers 00:20:35.532 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:35.532 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:35.532 Initialization complete. Launching workers. 00:20:35.532 ======================================================== 00:20:35.532 Latency(us) 00:20:35.532 Device Information : IOPS MiB/s Average min max 00:20:35.532 PCIE (0000:88:00.0) NSID 1 from core 0: 84079.74 328.44 380.14 34.55 4539.52 00:20:35.532 ======================================================== 00:20:35.532 Total : 84079.74 328.44 380.14 34.55 4539.52 00:20:35.532 00:20:35.532 00:24:26 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.532 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.903 Initializing NVMe Controllers 00:20:36.903 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:36.903 Initialization complete. Launching workers. 
00:20:36.903 ======================================================== 00:20:36.903 Latency(us) 00:20:36.903 Device Information : IOPS MiB/s Average min max 00:20:36.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12994.38 205.27 45595.57 00:20:36.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 74.00 0.29 13587.27 4970.36 51898.80 00:20:36.903 ======================================================== 00:20:36.903 Total : 154.00 0.60 13279.27 205.27 51898.80 00:20:36.903 00:20:36.903 00:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.903 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.276 Initializing NVMe Controllers 00:20:38.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:38.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:38.276 Initialization complete. Launching workers. 00:20:38.276 ======================================================== 00:20:38.276 Latency(us) 00:20:38.276 Device Information : IOPS MiB/s Average min max 00:20:38.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8200.05 32.03 3907.85 487.97 9871.51 00:20:38.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3858.14 15.07 8351.79 4415.42 18859.25 00:20:38.277 ======================================================== 00:20:38.277 Total : 12058.19 47.10 5329.73 487.97 18859.25 00:20:38.277 00:20:38.277 00:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:38.277 00:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:38.277 00:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.277 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.807 Initializing NVMe Controllers 00:20:40.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.807 Controller IO queue size 128, less than required. 00:20:40.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.807 Controller IO queue size 128, less than required. 00:20:40.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:40.807 Initialization complete. Launching workers. 
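Both namespaces in the large-block run warn that the controller I/O queue size (128) is smaller than what the workload can keep outstanding, so surplus requests sit in the host NVMe driver rather than on the wire, exactly as the notices advise. For reference, the invocation as run above (the -O 16384 argument is assumed here to set the perf tool's I/O unit size; treat that reading as an assumption):

# 256 KiB mixed workload at queue depth 128, 2 seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Its per-namespace latency table follows.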
00:20:40.807 ======================================================== 00:20:40.807 Latency(us) 00:20:40.807 Device Information : IOPS MiB/s Average min max 00:20:40.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 936.17 234.04 141090.57 90190.52 218008.90 00:20:40.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 564.70 141.17 232024.19 71736.80 354163.92 00:20:40.807 ======================================================== 00:20:40.807 Total : 1500.87 375.22 175304.11 71736.80 354163.92 00:20:40.807 00:20:40.807 00:24:32 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:40.807 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.064 No valid NVMe controllers or AIO or URING devices found 00:20:41.064 Initializing NVMe Controllers 00:20:41.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:41.064 Controller IO queue size 128, less than required. 00:20:41.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:41.064 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:41.064 Controller IO queue size 128, less than required. 00:20:41.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:41.064 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:41.064 WARNING: Some requested NVMe devices were skipped 00:20:41.064 00:24:32 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:41.064 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.590 Initializing NVMe Controllers 00:20:43.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.590 Controller IO queue size 128, less than required. 00:20:43.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:43.590 Controller IO queue size 128, less than required. 00:20:43.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:43.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:43.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:43.590 Initialization complete. Launching workers. 
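The 36964-byte run above aborts by design: spdk_nvme_perf drops any namespace whose sector size does not divide the requested I/O size, and with both namespaces formatted at 512 B nothing is left to test. A quick check of the arithmetic:

# Why -o 36964 removed every namespace while -o 262144 was accepted
expr 36964 % 512     # prints 100 -> not sector-aligned, ns skipped
expr 262144 % 512    # prints 0   -> accepted

The per-queue counters below come from the --transport-stat run; roughly, polls and idle_polls count poll-group iterations and the subset that found no work, sock_completions counts socket-level completion events, and nvme_completions/submitted_requests are the NVMe command totals behind the latency table.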
00:20:43.590 00:20:43.590 ==================== 00:20:43.590 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:43.590 TCP transport: 00:20:43.590 polls: 36618 00:20:43.590 idle_polls: 13295 00:20:43.590 sock_completions: 23323 00:20:43.590 nvme_completions: 3643 00:20:43.590 submitted_requests: 5542 00:20:43.590 queued_requests: 1 00:20:43.590 00:20:43.590 ==================== 00:20:43.590 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:43.590 TCP transport: 00:20:43.590 polls: 38496 00:20:43.590 idle_polls: 14142 00:20:43.591 sock_completions: 24354 00:20:43.591 nvme_completions: 3327 00:20:43.591 submitted_requests: 5004 00:20:43.591 queued_requests: 1 00:20:43.591 ======================================================== 00:20:43.591 Latency(us) 00:20:43.591 Device Information : IOPS MiB/s Average min max 00:20:43.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 908.60 227.15 144748.24 88113.02 251729.56 00:20:43.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 829.76 207.44 157375.74 64638.00 244516.51 00:20:43.591 ======================================================== 00:20:43.591 Total : 1738.36 434.59 150775.66 64638.00 251729.56 00:20:43.591 00:20:43.591 00:24:35 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:43.591 00:24:35 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.848 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.848 rmmod nvme_tcp 00:20:43.848 rmmod nvme_fabrics 00:20:44.107 rmmod nvme_keyring 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2340757 ']' 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2340757 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2340757 ']' 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2340757 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2340757 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:44.107 00:24:35 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2340757' 00:20:44.107 killing process with pid 2340757 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2340757 00:20:44.107 00:24:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2340757 00:20:46.063 00:24:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.063 00:24:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.063 00:24:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.063 00:24:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.063 00:24:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.063 00:24:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.063 00:24:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.063 00:24:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.965 00:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:47.965 00:20:47.965 real 0m21.108s 00:20:47.965 user 1m6.033s 00:20:47.965 sys 0m4.714s 00:20:47.965 00:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:47.965 00:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 ************************************ 00:20:47.965 END TEST nvmf_perf 00:20:47.966 ************************************ 00:20:47.966 00:24:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:47.966 00:24:39 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:47.966 00:24:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:47.966 00:24:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:47.966 00:24:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:47.966 ************************************ 00:20:47.966 START TEST nvmf_fio_host 00:20:47.966 ************************************ 00:20:47.966 00:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:47.966 * Looking for test storage... 
00:20:47.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable
00:24:39 nvmf_tcp.nvmf_fio_host --
common/autotest_common.sh@10 -- # set +x 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:49.865 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
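The bracketed checks that continue below classify each discovered port by driver and PCI device ID (0x1592/0x159b are E810 parts) before collecting its kernel net devices; in outline, a simplified sketch of the nvmf/common.sh loop, not its full branch structure:

# Simplified sketch: gather net devices for each supported NIC port
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    net_devs+=("${pci_net_devs[@]##*/}")    # yields cvl_0_0 and cvl_0_1 here
done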
00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:49.865 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.865 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:49.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:49.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
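With both E810 ports identified and is_hw=yes, nvmf_tcp_init splits them into a target side and an initiator side on the same machine: one port moves into a private network namespace as 10.0.0.2 while the other stays in the root namespace as 10.0.0.1. Condensed from the trace that follows (commands copied from this run):

# Namespace plumbing performed below
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # target reachable from root ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back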
00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:49.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:20:49.866 00:20:49.866 --- 10.0.0.2 ping statistics --- 00:20:49.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.866 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:20:49.866 00:20:49.866 --- 10.0.0.1 ping statistics --- 00:20:49.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.866 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2344714 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2344714 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2344714 ']' 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.866 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.866 [2024-07-16 00:24:41.391513] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:20:49.866 [2024-07-16 00:24:41.391587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.866 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.866 [2024-07-16 00:24:41.453997] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.123 [2024-07-16 00:24:41.562560] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:50.123 [2024-07-16 00:24:41.562610] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.123 [2024-07-16 00:24:41.562638] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.123 [2024-07-16 00:24:41.562649] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.123 [2024-07-16 00:24:41.562658] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.123 [2024-07-16 00:24:41.562809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.123 [2024-07-16 00:24:41.562894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.123 [2024-07-16 00:24:41.562921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.123 [2024-07-16 00:24:41.562923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.123 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.123 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:50.123 00:24:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:50.379 [2024-07-16 00:24:41.964643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.379 00:24:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:50.379 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.379 00:24:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.635 00:24:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:50.891 Malloc1 00:20:50.891 00:24:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.147 00:24:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:51.403 00:24:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.659 [2024-07-16 00:24:43.137276] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.659 00:24:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:51.915 00:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.171 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:52.171 fio-3.35 00:20:52.171 Starting 1 thread 00:20:52.171 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.698 00:20:54.698 test: (groupid=0, jobs=1): err= 0: pid=2345069: Tue Jul 16 00:24:45 2024 00:20:54.698 read: IOPS=8918, BW=34.8MiB/s (36.5MB/s)(69.9MiB/2006msec) 00:20:54.698 slat (nsec): min=1915, max=167569, avg=2524.46, stdev=1946.01 00:20:54.698 clat (usec): min=3498, max=14081, avg=7953.07, stdev=579.68 00:20:54.698 lat (usec): min=3528, max=14083, avg=7955.60, stdev=579.55 00:20:54.698 clat percentiles (usec): 00:20:54.698 | 1.00th=[ 6652], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:20:54.698 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:20:54.698 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8848], 00:20:54.698 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11076], 99.95th=[12387], 00:20:54.698 | 99.99th=[13435] 00:20:54.698 bw ( KiB/s): 
min=34816, max=36136, per=99.91%, avg=35642.00, stdev=572.11, samples=4 00:20:54.698 iops : min= 8704, max= 9034, avg=8910.50, stdev=143.03, samples=4 00:20:54.698 write: IOPS=8931, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2006msec); 0 zone resets 00:20:54.698 slat (usec): min=2, max=143, avg= 2.67, stdev= 1.51 00:20:54.698 clat (usec): min=1482, max=12409, avg=6356.22, stdev=517.83 00:20:54.698 lat (usec): min=1491, max=12411, avg=6358.90, stdev=517.78 00:20:54.698 clat percentiles (usec): 00:20:54.698 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5997], 00:20:54.698 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:20:54.698 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:20:54.698 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[10421], 99.95th=[10945], 00:20:54.698 | 99.99th=[12387] 00:20:54.698 bw ( KiB/s): min=35520, max=35904, per=99.98%, avg=35718.00, stdev=162.56, samples=4 00:20:54.698 iops : min= 8880, max= 8976, avg=8929.50, stdev=40.64, samples=4 00:20:54.698 lat (msec) : 2=0.01%, 4=0.11%, 10=99.73%, 20=0.15% 00:20:54.698 cpu : usr=57.16%, sys=35.76%, ctx=74, majf=0, minf=33 00:20:54.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:54.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.698 issued rwts: total=17890,17917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.698 00:20:54.698 Run status group 0 (all jobs): 00:20:54.698 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2006-2006msec 00:20:54.698 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2006-2006msec 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:54.698 00:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.698 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:54.698 fio-3.35 00:20:54.698 Starting 1 thread 00:20:54.698 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.225 00:20:57.225 test: (groupid=0, jobs=1): err= 0: pid=2345402: Tue Jul 16 00:24:48 2024 00:20:57.225 read: IOPS=8186, BW=128MiB/s (134MB/s)(257MiB/2007msec) 00:20:57.225 slat (nsec): min=2813, max=93422, avg=3831.54, stdev=1518.78 00:20:57.225 clat (usec): min=3251, max=18104, avg=9502.95, stdev=2497.49 00:20:57.225 lat (usec): min=3255, max=18107, avg=9506.78, stdev=2497.49 00:20:57.225 clat percentiles (usec): 00:20:57.225 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 7308], 00:20:57.225 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:20:57.225 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12780], 95.00th=[13960], 00:20:57.225 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17695], 99.95th=[17957], 00:20:57.225 | 99.99th=[17957] 00:20:57.225 bw ( KiB/s): min=58816, max=72128, per=49.94%, avg=65408.00, stdev=5975.07, samples=4 00:20:57.225 iops : min= 3676, max= 4508, avg=4088.00, stdev=373.44, samples=4 00:20:57.225 write: IOPS=4631, BW=72.4MiB/s (75.9MB/s)(134MiB/1853msec); 0 zone resets 00:20:57.225 slat (usec): min=30, max=147, avg=33.80, stdev= 4.83 00:20:57.225 clat (usec): min=5893, max=19439, avg=11040.86, stdev=2005.06 00:20:57.225 lat (usec): min=5925, max=19471, avg=11074.66, stdev=2005.36 00:20:57.225 clat percentiles (usec): 00:20:57.225 | 1.00th=[ 7504], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9241], 00:20:57.225 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:20:57.225 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13960], 95.00th=[14877], 00:20:57.225 | 99.00th=[16319], 99.50th=[16909], 99.90th=[18220], 99.95th=[18482], 00:20:57.225 | 99.99th=[19530] 00:20:57.225 bw ( KiB/s): min=61440, max=75008, per=91.96%, avg=68152.00, stdev=6002.96, samples=4 00:20:57.225 iops : min= 3840, max= 4688, avg=4259.50, stdev=375.18, samples=4 00:20:57.225 lat (msec) : 4=0.16%, 10=51.14%, 20=48.69% 00:20:57.225 cpu : usr=73.88%, sys=22.43%, ctx=23, majf=0, minf=47 
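Both fio jobs in this test use the same invocation pattern: SPDK's fio plugin is LD_PRELOADed into the stock fio binary after the sanitizer probing above, and the NVMe-oF target is encoded in --filename instead of a block-device path. A standalone sketch with the paths detected in this run:

# First job of this test, reconstructed from the trace above
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The depth distribution and bandwidth totals for the mock-SGL run continue below.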
00:20:57.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:20:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.225 issued rwts: total=16430,8583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.225 00:20:57.225 Run status group 0 (all jobs): 00:20:57.225 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2007-2007msec 00:20:57.225 WRITE: bw=72.4MiB/s (75.9MB/s), 72.4MiB/s-72.4MiB/s (75.9MB/s-75.9MB/s), io=134MiB (141MB), run=1853-1853msec 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.225 rmmod nvme_tcp 00:20:57.225 rmmod nvme_fabrics 00:20:57.225 rmmod nvme_keyring 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2344714 ']' 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2344714 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2344714 ']' 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2344714 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344714 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344714' 00:20:57.225 killing process with pid 2344714 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2344714 00:20:57.225 00:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2344714 00:20:57.792 00:24:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.792 00:24:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:20:57.792 00:24:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.792 00:24:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.792 00:24:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.792 00:24:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.792 00:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.792 00:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.692 00:24:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.692 00:20:59.692 real 0m11.845s 00:20:59.692 user 0m35.131s 00:20:59.692 sys 0m3.960s 00:20:59.692 00:24:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.692 00:24:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.692 ************************************ 00:20:59.692 END TEST nvmf_fio_host 00:20:59.692 ************************************ 00:20:59.692 00:24:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:59.692 00:24:51 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:59.692 00:24:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:59.692 00:24:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.692 00:24:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:59.692 ************************************ 00:20:59.692 START TEST nvmf_failover 00:20:59.693 ************************************ 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:59.693 * Looking for test storage... 
00:20:59.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:... [paths/export.sh@3, @4 and the echo at @6 repeat the same Go/protoc/golangci toolchain PATH already shown in full at paths/export.sh@2 above; duplicated PATH dumps omitted] 00:24:51 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.693 00:24:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:02.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:02.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:02.224 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.224 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:02.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:02.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:21:02.225 00:21:02.225 --- 10.0.0.2 ping statistics --- 00:21:02.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.225 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:21:02.225 00:21:02.225 --- 10.0.0.1 ping statistics --- 00:21:02.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.225 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2347713 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2347713 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2347713 ']' 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.225 [2024-07-16 00:24:53.458914] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
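For reference, the topology nvmf_tcp_init assembled in the trace above puts the target-side port into a private network namespace (cvl_0_0_ns_spdk) and leaves the initiator port in the root namespace, so host and target really do exchange NVMe/TCP traffic across the two e810 ports. A minimal sketch of the equivalent manual setup, using the interface names this run reported (every command below appears verbatim in the trace; only the grouping and comments are added):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move target port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (inside netns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator check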
00:21:02.225 [2024-07-16 00:24:53.459010] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.225 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.225 [2024-07-16 00:24:53.527163] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:02.225 [2024-07-16 00:24:53.647871] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.225 [2024-07-16 00:24:53.647937] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.225 [2024-07-16 00:24:53.647953] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.225 [2024-07-16 00:24:53.647966] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.225 [2024-07-16 00:24:53.647978] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.225 [2024-07-16 00:24:53.648083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.225 [2024-07-16 00:24:53.649900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.225 [2024-07-16 00:24:53.649912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.225 00:24:53 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:02.483 [2024-07-16 00:24:54.005677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.483 00:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:02.789 Malloc0 00:21:02.789 00:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.048 00:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:03.305 00:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.562 [2024-07-16 00:24:55.014738] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.562 00:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:03.819 [2024-07-16 
00:24:55.255400] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:03.819 00:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:04.077 [2024-07-16 00:24:55.504268] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2347919 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2347919 /var/tmp/bdevperf.sock 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2347919 ']' 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.077 00:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:04.335 00:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.335 00:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:04.335 00:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:04.592 NVMe0n1 00:21:04.592 00:24:56 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.157 00:21:05.157 00:24:56 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2348021 00:21:05.157 00:24:56 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.157 00:24:56 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:06.088 00:24:57 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.347 [2024-07-16 00:24:57.804888] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14700c0 is same with the state(5) to be set 00:21:06.347 [2024-07-16 00:24:57.804956] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14700c0 is same with the state(5) to be set 00:21:06.347 [tcp.c:1621:nvmf_tcp_qpair_set_recv_state *ERROR* line repeated many more times for tqpair=0x14700c0 while the 4420 listener is torn down; duplicate lines omitted]
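Stripped of the xtrace noise, the failover scenario being exercised here is simple: bdevperf attaches the same subsystem twice under a single controller name, which gives the NVMe0n1 bdev two paths, and the test then removes the active listener while I/O is in flight. A condensed sketch of the host-side RPC sequence, with $rpc standing in for the full rpc.py path used throughout this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Two paths to one bdev: same -b name, two target ports.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Drop the first path mid-I/O; bdevperf is expected to continue on 4421.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The tcp.c:1621 spam collapsed above is the target's qpair state machine logging each teardown transition for the dropped connections; it is loud, but in this run it does not indicate a failure (the workload completes and bdevperf is reaped normally below).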
00:24:57 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:09.631 00:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:09.631 00:21:09.631 00:25:01 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:09.888 [2024-07-16 00:25:01.402614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1471090 is same with the state(5) to be set 00:21:09.888 [tcp.c:1621 line repeated many more times for tqpair=0x1471090 while the 4421 listener is torn down; duplicate lines omitted]
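The second hop repeats the pattern with a third path, so the bdev never loses its last connection: 4422 is attached under the same NVMe0 name before 4421 is removed. Same sketch conventions as above:

  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421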
00:25:01 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:13.180
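Between hops the script only sleeps. If you are reproducing this by hand and want to confirm which paths the host still holds at this point, SPDK's bdev_nvme_get_controllers RPC is the usual probe (not used by failover.sh itself, and its output shape varies across SPDK versions):

  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0  # list NVMe0's controller paths

What follows is the failback leg: the original 4420 listener is re-announced, then 4422 is torn down, returning the bdev to its starting path.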
00:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.180 [2024-07-16 00:25:04.671176] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.180 00:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:14.113 00:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:14.372 [2024-07-16 00:25:05.931563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14720e0 is same with the state(5) to be set 00:21:14.372 [tcp.c:1621 line repeated many more times for tqpair=0x14720e0 while the 4422 listener is torn down; duplicate lines omitted]
*ERROR*: The recv state of tqpair=0x14720e0 is same with the state(5) to be set 00:21:14.373 [2024-07-16 00:25:05.932685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14720e0 is same with the state(5) to be set 00:21:14.373 [2024-07-16 00:25:05.932697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14720e0 is same with the state(5) to be set 00:21:14.373 [2024-07-16 00:25:05.932709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14720e0 is same with the state(5) to be set 00:21:14.373 [2024-07-16 00:25:05.932721] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14720e0 is same with the state(5) to be set 00:21:14.373 [2024-07-16 00:25:05.932732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14720e0 is same with the state(5) to be set 00:21:14.373 [2024-07-16 00:25:05.932744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14720e0 is same with the state(5) to be set 00:21:14.373 00:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2348021 00:21:20.939 0 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2347919 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2347919 ']' 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2347919 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2347919 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2347919' 00:21:20.939 killing process with pid 2347919 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2347919 00:21:20.939 00:25:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2347919 00:21:20.939 00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:20.939 [2024-07-16 00:24:55.568023] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:21:20.939 [2024-07-16 00:24:55.568113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347919 ] 00:21:20.939 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.939 [2024-07-16 00:24:55.631759] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.939 [2024-07-16 00:24:55.745629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.939 Running I/O for 15 seconds... 
00:21:20.939 [2024-07-16 00:24:57.807358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.939 [2024-07-16 00:24:57.807411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.939 [the same print_command/print_completion pair repeats for every outstanding I/O on qid:1 -- READ commands for lba:79728 through lba:80072 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands for lba:80080 through lba:80608 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each reported ABORTED - SQ DELETION (00/08)]
00:21:20.942 [2024-07-16 00:24:57.810820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:20.942 [2024-07-16 00:24:57.810843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80616 len:8 PRP1 0x0 PRP2 0x0
00:21:20.942 [2024-07-16 00:24:57.810856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.942 [the same sequence -- nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, followed by a manual completion reported ABORTED - SQ DELETION (00/08) -- repeats for the queued WRITE commands lba:80624 through lba:80736]
00:21:20.942 [2024-07-16 00:24:57.811685] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d20380 was disconnected and freed. reset controller.
00:21:20.942 [2024-07-16 00:24:57.811703] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:20.942 [2024-07-16 00:24:57.811738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.942 [2024-07-16 00:24:57.811756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.942 [the same pair repeats for the remaining ASYNC EVENT REQUESTs, qid:0 cid:1 through cid:3]
00:21:20.942 [2024-07-16 00:24:57.811850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:20.942 [2024-07-16 00:24:57.811915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfa2e0 (9): Bad file descriptor
00:21:20.942 [2024-07-16 00:24:57.815210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:20.942 [2024-07-16 00:24:57.844028] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:20.942 [2024-07-16 00:25:01.404697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:20.942 [2024-07-16 00:25:01.404743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs, 2024-07-16 00:25:01.404773 through 00:25:01.408643: queued WRITE (lba:75400-76296) and READ (lba:75280-75392) commands on qid:1, each completed ABORTED - SQ DELETION (00/08); the final four WRITEs (lba:76272-76296) completed manually via nvme_qpair_abort_queued_reqs ...]
00:21:20.946 [2024-07-16 00:25:01.408705] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ec5080 was disconnected and freed. reset controller.
00:21:20.946 [2024-07-16 00:25:01.408723] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:20.946 [2024-07-16 00:25:01.408758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.946 [2024-07-16 00:25:01.408779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.946 [2024-07-16 00:25:01.408795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.946 [2024-07-16 00:25:01.408808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.946 [2024-07-16 00:25:01.408822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.946 [2024-07-16 00:25:01.408835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.946 [2024-07-16 00:25:01.408848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.946 [2024-07-16 00:25:01.408870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.946 [2024-07-16 00:25:01.408892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:20.946 [2024-07-16 00:25:01.408946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfa2e0 (9): Bad file descriptor
00:21:20.946 [2024-07-16 00:25:01.412155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:20.946 [2024-07-16 00:25:01.602794] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
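With two hops now recorded (4420 to 4421, then 4421 to 4422), the failover chain and each reset outcome can be pulled out of the surrounding abort noise in one pass; a sketch, again assuming the output has been saved as build.log:

# list failover transitions, controller-failed events, and reset results
grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_fail|_bdev_nvme_reset_ctrlr_complete' build.log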
00:21:20.946 [2024-07-16 00:25:05.933933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.946 [2024-07-16 00:25:05.933981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs, 2024-07-16 00:25:05.934013 through 00:25:05.935450: queued READ (lba:45144-45208) and WRITE (lba:45336-45656) commands on qid:1, each completed ABORTED - SQ DELETION (00/08) ...]
00:21:20.947 [2024-07-16 00:25:05.935465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:20.947 [2024-07-16 00:25:05.935477] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.935978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.935994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.936007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.936022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.936036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.936050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.936064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:20.947 [2024-07-16 00:25:05.936083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.947 [2024-07-16 00:25:05.936097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936379] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.948 [2024-07-16 00:25:05.936511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.948 [2024-07-16 00:25:05.936538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.948 [2024-07-16 00:25:05.936566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.948 [2024-07-16 00:25:05.936593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.948 [2024-07-16 00:25:05.936621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.948 [2024-07-16 00:25:05.936648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.948 [2024-07-16 00:25:05.936675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.936977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.936998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46096 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.948 [2024-07-16 00:25:05.937356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.948 [2024-07-16 00:25:05.937369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.949 [2024-07-16 00:25:05.937397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.949 [2024-07-16 00:25:05.937424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.949 [2024-07-16 00:25:05.937453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.949 [2024-07-16 00:25:05.937481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.949 [2024-07-16 00:25:05.937508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.949 [2024-07-16 00:25:05.937536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.949 [2024-07-16 
00:25:05.937569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.949 [2024-07-16 00:25:05.937597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.949 [2024-07-16 00:25:05.937626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.949 [2024-07-16 00:25:05.937654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.949 [2024-07-16 00:25:05.937682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.949 [2024-07-16 00:25:05.937710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.949 [2024-07-16 00:25:05.937759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.949 [2024-07-16 00:25:05.937771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45328 len:8 PRP1 0x0 PRP2 0x0 00:21:20.949 [2024-07-16 00:25:05.937784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.949 [2024-07-16 00:25:05.937854] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ec4d40 was disconnected and freed. reset controller. 
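Annotation: the long run of NOTICE pairs above is the nvme qpair layer printing each still-queued command as it is failed back with ABORTED - SQ DELETION; the submission queue on the old path is being deleted, and bdev_nvme retries the I/O once the controller resets onto the new path, so the aborts do not by themselves indicate lost data. To tally the aborts per opcode from a saved copy of output like this (the file name here is an assumption), a one-liner is enough:

    # failover.log is a hypothetical saved copy of this console output
    grep -o 'WRITE sqid:1\|READ sqid:1' failover.log | sort | uniq -c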
00:21:20.949 [2024-07-16 00:25:05.937873] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:20.949 [2024-07-16 00:25:05.937916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.949 [2024-07-16 00:25:05.937934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.949 [2024-07-16 00:25:05.937949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.949 [2024-07-16 00:25:05.937962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.949 [2024-07-16 00:25:05.937975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.949 [2024-07-16 00:25:05.937987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.949 [2024-07-16 00:25:05.938001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.949 [2024-07-16 00:25:05.938013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.949 [2024-07-16 00:25:05.938026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:20.949 [2024-07-16 00:25:05.938083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfa2e0 (9): Bad file descriptor
00:21:20.949 [2024-07-16 00:25:05.941295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:20.949 [2024-07-16 00:25:06.011583] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
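Annotation: the block above is one complete failover cycle as seen from the initiator: the trid moves from 4422 back to 4420, the admin queue's four outstanding ASYNC EVENT REQUESTs are aborted along with the I/O, the controller transiently enters the failed state (the Bad file descriptor flush error appears to be the already-closed TCP socket), and the reset against the new address succeeds. A sketch for pulling just the path moves and reset outcomes out of such a log (file name assumed):

    grep -E 'bdev_nvme_failover_trid|_bdev_nvme_reset_ctrlr_complete' failover.log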
00:21:20.949
00:21:20.949 Latency(us)
00:21:20.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.949 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:20.949 Verification LBA range: start 0x0 length 0x4000
00:21:20.949 NVMe0n1 : 15.01 8629.74 33.71 743.87 0.00 13627.91 801.00 15631.55
00:21:20.949 ===================================================================================================================
00:21:20.949 Total : 8629.74 33.71 743.87 0.00 13627.91 801.00 15631.55
00:21:20.949 Received shutdown signal, test time was about 15.000000 seconds
00:21:20.949
00:21:20.949 Latency(us)
00:21:20.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.949 ===================================================================================================================
00:21:20.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2349877
00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2349877 /var/tmp/bdevperf.sock
00:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2349877 ']'
00:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
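Annotation: failover.sh@65-67 above assert that the 15-second verify run survived exactly three successful controller resets, one per programmed path move. Reconstructed as a sketch (the $testdir variable and the if form are assumptions; the counted file is the bdevperf output saved as try.txt):

    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    if ((count != 3)); then
        exit 1   # the trace shows count=3, so the test continues
    fi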
00:21:20.949 00:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-16 00:25:12.570101] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-07-16 00:25:12.810777] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:21.462 00:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:21.719 NVMe0n1
00:25:13 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:21.976
00:21:21.976 00:25:13 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:22.568
00:21:22.568 00:25:13 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:22.569 00:25:13 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:22.824 00:25:14 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:21:26.099 00:25:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:17 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2350550
00:25:17 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:17 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2350550
00:21:27.465 0
00:21:27.465 00:25:18 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
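Annotation: failover.sh@76-84 above stage the second scenario: two extra listeners are added on the target, the same bdev name is attached over three ports (the repeated -b NVMe0 attaches appear to register alternate trids for one controller, which is why only a single NVMe0n1 is created and bdev_nvme_failover_trid notices follow; the exact mechanism depends on the SPDK revision), and then the active 4420 path is detached to force a failover while bdevperf runs. Condensed into a sketch, with rpc.py paths shortened and all values copied from the trace:

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller \
        NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # forces failover off 4420

The full bdevperf log that the cat above prints follows.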
[2024-07-16 00:25:12.051569] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:21:27.465 [2024-07-16 00:25:12.051651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349877 ]
00:21:27.465 EAL: No free 2048 kB hugepages reported on node 1
00:21:27.465 [2024-07-16 00:25:12.110736] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:27.465 [2024-07-16 00:25:12.216126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:27.465 [2024-07-16 00:25:14.355644] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:27.465 [2024-07-16 00:25:14.355732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:27.465 [2024-07-16 00:25:14.355755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:27.465 [2024-07-16 00:25:14.355771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:27.465 [2024-07-16 00:25:14.355784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:27.466 [2024-07-16 00:25:14.355798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:27.466 [2024-07-16 00:25:14.355811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:27.466 [2024-07-16 00:25:14.355825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:27.466 [2024-07-16 00:25:14.355838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:27.466 [2024-07-16 00:25:14.355851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:27.466 [2024-07-16 00:25:14.355905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:27.466 [2024-07-16 00:25:14.355938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7f2e0 (9): Bad file descriptor
00:21:27.466 [2024-07-16 00:25:14.370229] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:27.466 Running I/O for 1 seconds...
00:21:27.466
00:21:27.466 Latency(us)
00:21:27.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:27.466 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:27.466 Verification LBA range: start 0x0 length 0x4000
00:21:27.466 NVMe0n1 : 1.01 8608.39 33.63 0.00 0.00 14811.00 3155.44 19320.98
00:21:27.466 ===================================================================================================================
00:21:27.466 Total : 8608.39 33.63 0.00 0.00 14811.00 3155.44 19320.98
00:21:27.466 00:25:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:27.722 00:25:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:27.978 00:25:19 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:28.234 00:25:19 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:21:31.505 00:25:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:22 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2349877
00:25:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2349877 ']'
00:25:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2349877
00:25:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2349877
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2349877'
killing process with pid 2349877
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2349877
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2349877
00:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
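Annotation: both bdevperf runs in this suite use the tool's RPC mode: -z starts it idle on a private socket, the bdev topology is built over that socket, and perform_tests fires the workload (-q 128 -o 4096 -w verify -t 1 for the run above). The skeleton, reconstructed from the trace; the backgrounding and $! capture are assumptions, since xtrace does not show them:

    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # helper from autotest_common.sh
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    wait "$bdevperf_pid"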
00:21:32.021 00:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2347713 ']'
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2347713
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2347713 ']'
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2347713
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2347713
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2347713'
killing process with pid 2347713
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2347713
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2347713
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:34.487 00:25:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:34.487
00:21:34.487 real 0m34.773s
00:21:34.487 user 2m2.047s
00:21:34.487 sys 0m5.887s
00:21:34.487 00:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:34.487 00:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
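Annotation: nvmftestfini above is the suite-level teardown. modprobe -r unloads the kernel initiator stack (the three rmmod lines are its verbose output, nvme_fabrics and nvme_keyring coming out as now-unused dependencies), the long-lived target app (pid 2347713, running as reactor_1) is killed and reaped, and the test's leftover address is flushed. Reduced to a sketch ($nvmfpid is an assumed name; it is 2347713 in this run):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    ip -4 addr flush cvl_0_1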
00:21:34.487 ************************************
00:21:34.487 END TEST nvmf_failover
00:21:34.487 ************************************
00:21:34.487 00:25:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:21:34.487 00:25:25 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:21:34.487 00:25:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:34.487 00:25:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:34.487 00:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:34.487 ************************************
00:21:34.487 START TEST nvmf_host_discovery
00:21:34.487 ************************************
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:21:34.487 * Looking for test storage...
00:21:34.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
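Annotation: every suite in this job is dispatched through run_test (autotest_common.sh), which prints the START/END banners above, times the child script (the real/user/sys triple earlier), and propagates its exit status. The invocation as traced, with $rootdir assumed for brevity; the sourcing of common.sh then continues below:

    run_test nvmf_host_discovery "$rootdir/test/nvmf/host/discovery.sh" --transport=tcp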
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:...:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:...:/var/lib/snapd/snap/bin
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:...:/var/lib/snapd/snap/bin
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:...:/var/lib/snapd/snap/bin (duplicated golangci/protoc/go toolchain PATH segments condensed in the four records above)
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:34.487 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:34.488 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:34.488 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.488 00:25:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.488 00:25:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.488 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:34.488 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:34.488 00:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:34.488 00:25:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.385 00:25:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:36.385 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:36.385 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:36.385 00:25:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.385 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:36.386 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:36.386 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.386 00:25:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.386 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:36.643 00:25:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.643 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:36.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:21:36.644 00:21:36.644 --- 10.0.0.2 ping statistics --- 00:21:36.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.644 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:21:36.644 00:21:36.644 --- 10.0.0.1 ping statistics --- 00:21:36.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.644 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2353145 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2353145 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2353145 ']' 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.644 00:25:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.644 [2024-07-16 00:25:28.131601] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:21:36.644 [2024-07-16 00:25:28.131679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.644 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.644 [2024-07-16 00:25:28.201155] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.902 [2024-07-16 00:25:28.318411] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.902 [2024-07-16 00:25:28.318483] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.902 [2024-07-16 00:25:28.318499] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.902 [2024-07-16 00:25:28.318513] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.902 [2024-07-16 00:25:28.318524] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
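The nvmftestinit records above build the test topology: both e810 ports are flushed, one (cvl_0_0) is moved into a fresh network namespace to act as the target side, the 10.0.0.x addresses are assigned, the 4420 port is opened in the firewall, and both directions are ping-checked. A condensed sketch of the commands visible in this trace (interface names and addresses are taken from this run, so treat it as illustrative rather than a standalone script):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

nvmfappstart then launches nvmf_tgt inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2); the @270 record above prepends NVMF_TARGET_NS_CMD to NVMF_APP for exactly this purpose.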
00:21:36.902 [2024-07-16 00:25:28.318556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 [2024-07-16 00:25:29.108213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 [2024-07-16 00:25:29.116378] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 null0 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 null1 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2353296 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2353296 /tmp/host.sock 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2353296 ']' 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:37.835 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.835 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 [2024-07-16 00:25:29.189417] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:21:37.835 [2024-07-16 00:25:29.189496] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353296 ] 00:21:37.835 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.835 [2024-07-16 00:25:29.250776] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.835 [2024-07-16 00:25:29.366806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:38.094 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
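Two SPDK apps are in play from here on: the target (nvmfpid 2353145, reached over the default /var/tmp/spdk.sock inside the namespace) and the host (hostpid 2353296, reached with -s /tmp/host.sock). Condensing the rpc_cmd records so far into one hedged sequence, with the script line numbers from the trace as comments:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192          # @32: TCP transport, 8192-byte IO unit
    rpc_cmd nvmf_subsystem_add_listener \
        nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009   # @33: discovery service
    rpc_cmd bdev_null_create null0 1000 512                  # @35: null bdev, 1000 MB / 512-byte blocks
    rpc_cmd bdev_null_create null1 1000 512                  # @36
    rpc_cmd bdev_wait_for_examine                            # @37
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme         # @50: enables the discovery INFO lines below
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test   # @51
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0         # @86
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0   # @90

Because the host is already following the discovery service, adding the 4420 data listener just below is enough to make controller nvme0 and bdev nvme0n1 appear on the host side without any explicit connect.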
00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:38.095 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.353 [2024-07-16 00:25:29.770148] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:38.353 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:38.354 00:25:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:38.919 [2024-07-16 00:25:30.506724] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:38.919 [2024-07-16 00:25:30.506764] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:38.919 [2024-07-16 00:25:30.506791] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:39.177 [2024-07-16 00:25:30.594143] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:39.438 [2024-07-16 00:25:30.819764] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:39.438 [2024-07-16 00:25:30.819793] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.438 00:25:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.438 00:25:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:39.438 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:39.698 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.699 [2024-07-16 00:25:31.218300] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:39.699 [2024-07-16 00:25:31.218867] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:39.699 [2024-07-16 00:25:31.218936] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:39.699 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.019 [2024-07-16 00:25:31.346762] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:40.019 00:25:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:40.019 [2024-07-16 00:25:31.410586] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:40.019 [2024-07-16 00:25:31.410611] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:40.020 [2024-07-16 00:25:31.410622] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:40.960 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.961 [2024-07-16 00:25:32.434641] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:40.961 [2024-07-16 00:25:32.434688] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:40.961 [2024-07-16 00:25:32.440719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.961 [2024-07-16 00:25:32.440765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.961 [2024-07-16 00:25:32.440785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.961 [2024-07-16 00:25:32.440800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.961 [2024-07-16 00:25:32.440817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.961 [2024-07-16 00:25:32.440833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.961 [2024-07-16 00:25:32.440849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.961 [2024-07-16 00:25:32.440864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.961 [2024-07-16 00:25:32.440896] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:40.961 [2024-07-16 00:25:32.450723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.961 [2024-07-16 00:25:32.460768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.961 [2024-07-16 00:25:32.461063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.961 [2024-07-16 00:25:32.461093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.961 [2024-07-16 00:25:32.461110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.961 [2024-07-16 00:25:32.461134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.961 [2024-07-16 00:25:32.461156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.961 [2024-07-16 00:25:32.461170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.961 [2024-07-16 00:25:32.461202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.961 [2024-07-16 00:25:32.461222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.961 [2024-07-16 00:25:32.470850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.961 [2024-07-16 00:25:32.471122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.961 [2024-07-16 00:25:32.471150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.961 [2024-07-16 00:25:32.471167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.961 [2024-07-16 00:25:32.471195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.961 [2024-07-16 00:25:32.471218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.961 [2024-07-16 00:25:32.471232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.961 [2024-07-16 00:25:32.471246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
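
The retry helper whose internals are spelled out in the xtrace above (common/autotest_common.sh@912-@918) can be reconstructed from the echoed line numbers; a sketch, not the verbatim source:

    waitforcondition() {
        local cond=$1    # a bash expression, passed as a single string
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

Every '[[ ... ]]' comparison echoed in this stretch is one iteration of that eval; the sleep 1 between polls is what paces the 00:25:32 -> 00:25:33 transitions seen further down.
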
00:21:40.961 [2024-07-16 00:25:32.471264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:40.961 [2024-07-16 00:25:32.480944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.961 [2024-07-16 00:25:32.481133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.961 [2024-07-16 00:25:32.481162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.961 [2024-07-16 00:25:32.481182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.961 [2024-07-16 00:25:32.481204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.961 [2024-07-16 00:25:32.481225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.961 [2024-07-16 00:25:32.481239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.961 [2024-07-16 00:25:32.481254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.961 [2024-07-16 00:25:32.481274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
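
The notification bookkeeping checked at the top of this stretch (host/discovery.sh@74-@75) is one RPC plus a jq count. A sketch, with the socket path and the id arithmetic taken from the trace (notify_id advances 2 -> 4 once two more events arrive below); rpc_cmd is the suite's wrapper around scripts/rpc.py:

    get_notification_count() {
        # count events newer than the last-seen id, then advance the id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
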
00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:40.961 [2024-07-16 00:25:32.491022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.961 [2024-07-16 00:25:32.491277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.961 [2024-07-16 00:25:32.491305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.961 [2024-07-16 00:25:32.491321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.961 [2024-07-16 00:25:32.491342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.961 [2024-07-16 00:25:32.491377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.961 [2024-07-16 00:25:32.491416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.961 [2024-07-16 00:25:32.491433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.961 [2024-07-16 00:25:32.491455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.961 [2024-07-16 00:25:32.501110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.961 [2024-07-16 00:25:32.501376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.961 [2024-07-16 00:25:32.501403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.961 [2024-07-16 00:25:32.501418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.961 [2024-07-16 00:25:32.501440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.961 [2024-07-16 00:25:32.501472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.961 [2024-07-16 00:25:32.501490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.961 [2024-07-16 00:25:32.501519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.961 [2024-07-16 00:25:32.501542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
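
The two list getters being polled here (host/discovery.sh@55 and @59) are, per the trace, single RPC pipelines; sketched:

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

The trailing xargs folds the sorted names onto one space-separated line, which is what makes the string compare against "nvme0n1 nvme0n2" work.
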
00:21:40.961 [2024-07-16 00:25:32.511183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.961 [2024-07-16 00:25:32.511411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.961 [2024-07-16 00:25:32.511439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.961 [2024-07-16 00:25:32.511455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.961 [2024-07-16 00:25:32.511476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.961 [2024-07-16 00:25:32.511511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.961 [2024-07-16 00:25:32.511529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.961 [2024-07-16 00:25:32.511542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.961 [2024-07-16 00:25:32.511591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.961 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.961 [2024-07-16 00:25:32.521260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.961 [2024-07-16 00:25:32.521516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.961 [2024-07-16 00:25:32.521544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.961 [2024-07-16 00:25:32.521560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.961 [2024-07-16 00:25:32.521582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.961 [2024-07-16 00:25:32.521629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.961 [2024-07-16 00:25:32.521649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.962 [2024-07-16 00:25:32.521662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.962 [2024-07-16 00:25:32.521681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:40.962 [2024-07-16 00:25:32.531341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.962 [2024-07-16 00:25:32.531652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.962 [2024-07-16 00:25:32.531679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.962 [2024-07-16 00:25:32.531695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.962 [2024-07-16 00:25:32.531717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.962 [2024-07-16 00:25:32.531950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.962 [2024-07-16 00:25:32.531972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.962 [2024-07-16 00:25:32.531986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.962 [2024-07-16 00:25:32.532006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
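
Same pattern for the path check (host/discovery.sh@63), which compares the numerically sorted trsvcid values of every path on one controller; a sketch:

    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
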
00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.962 [2024-07-16 00:25:32.541419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.962 [2024-07-16 00:25:32.541669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.962 [2024-07-16 00:25:32.541699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.962 [2024-07-16 00:25:32.541717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.962 [2024-07-16 00:25:32.541741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.962 [2024-07-16 00:25:32.541763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.962 [2024-07-16 00:25:32.541778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.962 [2024-07-16 00:25:32.541793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.962 [2024-07-16 00:25:32.541814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.962 [2024-07-16 00:25:32.551497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.962 [2024-07-16 00:25:32.551743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.962 [2024-07-16 00:25:32.551773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.962 [2024-07-16 00:25:32.551791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.962 [2024-07-16 00:25:32.551815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.962 [2024-07-16 00:25:32.551852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.962 [2024-07-16 00:25:32.551871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.962 [2024-07-16 00:25:32.551895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.962 [2024-07-16 00:25:32.551943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:40.962 [2024-07-16 00:25:32.561574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.962 [2024-07-16 00:25:32.561816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.962 [2024-07-16 00:25:32.561845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173ae20 with addr=10.0.0.2, port=4420 00:21:40.962 [2024-07-16 00:25:32.561863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173ae20 is same with the state(5) to be set 00:21:40.962 [2024-07-16 00:25:32.561895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173ae20 (9): Bad file descriptor 00:21:40.962 [2024-07-16 00:25:32.561936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:40.962 [2024-07-16 00:25:32.561949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:40.962 [2024-07-16 00:25:32.561962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:40.962 [2024-07-16 00:25:32.562015] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:40.962 [2024-07-16 00:25:32.562044] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:40.962 [2024-07-16 00:25:32.562067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:40.962 00:25:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && 
((notification_count == expected_count))' 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:42.342 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.343 00:25:33 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.343 00:25:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 [2024-07-16 00:25:34.808173] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:43.283 [2024-07-16 00:25:34.808213] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:43.283 [2024-07-16 00:25:34.808242] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:43.283 [2024-07-16 00:25:34.894508] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:43.542 [2024-07-16 00:25:34.961997] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:43.542 [2024-07-16 00:25:34.962046] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.543 request: 00:21:43.543 { 00:21:43.543 "name": "nvme", 00:21:43.543 "trtype": "tcp", 00:21:43.543 "traddr": "10.0.0.2", 00:21:43.543 "adrfam": "ipv4", 00:21:43.543 "trsvcid": "8009", 00:21:43.543 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:43.543 "wait_for_attach": true, 00:21:43.543 "method": "bdev_nvme_start_discovery", 00:21:43.543 "req_id": 1 00:21:43.543 } 00:21:43.543 Got JSON-RPC error response 00:21:43.543 response: 00:21:43.543 { 00:21:43.543 "code": -17, 00:21:43.543 "message": "File exists" 00:21:43.543 } 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:43.543 00:25:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.543 request: 00:21:43.543 { 00:21:43.543 "name": "nvme_second", 00:21:43.543 "trtype": "tcp", 00:21:43.543 "traddr": "10.0.0.2", 00:21:43.543 "adrfam": "ipv4", 00:21:43.543 "trsvcid": "8009", 00:21:43.543 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:43.543 "wait_for_attach": true, 00:21:43.543 "method": "bdev_nvme_start_discovery", 00:21:43.543 "req_id": 1 00:21:43.543 } 00:21:43.543 Got JSON-RPC error response 00:21:43.543 response: 00:21:43.543 { 00:21:43.543 "code": -17, 00:21:43.543 "message": "File exists" 00:21:43.543 } 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.543 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.801 00:25:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:43.801 00:25:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.802 00:25:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.734 [2024-07-16 00:25:36.181549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.734 [2024-07-16 00:25:36.181595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1736e70 with addr=10.0.0.2, port=8010 00:21:44.734 [2024-07-16 00:25:36.181622] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:44.734 [2024-07-16 00:25:36.181637] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:44.734 [2024-07-16 00:25:36.181651] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:45.674 [2024-07-16 00:25:37.183984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.674 [2024-07-16 00:25:37.184023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1736e70 with addr=10.0.0.2, port=8010 00:21:45.674 [2024-07-16 00:25:37.184045] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:45.674 [2024-07-16 00:25:37.184059] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:45.674 [2024-07-16 00:25:37.184071] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:46.612 [2024-07-16 00:25:38.186182] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:46.612 request: 00:21:46.612 { 00:21:46.612 "name": "nvme_second", 00:21:46.612 "trtype": "tcp", 00:21:46.612 "traddr": "10.0.0.2", 00:21:46.612 "adrfam": "ipv4", 00:21:46.612 "trsvcid": "8010", 00:21:46.612 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:46.612 "wait_for_attach": false, 00:21:46.612 "attach_timeout_ms": 3000, 00:21:46.612 "method": "bdev_nvme_start_discovery", 00:21:46.612 "req_id": 1 00:21:46.612 } 00:21:46.612 Got JSON-RPC error response 00:21:46.612 response: 00:21:46.612 { 00:21:46.612 "code": -110, 
00:21:46.612 "message": "Connection timed out" 00:21:46.612 } 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:46.612 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2353296 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.873 rmmod nvme_tcp 00:21:46.873 rmmod nvme_fabrics 00:21:46.873 rmmod nvme_keyring 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2353145 ']' 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2353145 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2353145 ']' 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2353145 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2353145 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2353145' 00:21:46.873 killing process with pid 2353145 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2353145 00:21:46.873 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2353145 00:21:47.133 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.133 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.133 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.133 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.133 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.133 00:25:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.133 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.133 00:25:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:49.668 00:21:49.668 real 0m14.656s 00:21:49.668 user 0m21.820s 00:21:49.668 sys 0m2.759s 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.668 ************************************ 00:21:49.668 END TEST nvmf_host_discovery 00:21:49.668 ************************************ 00:21:49.668 00:25:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:49.668 00:25:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:49.668 00:25:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:49.668 00:25:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.668 00:25:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.668 ************************************ 00:21:49.668 START TEST nvmf_host_multipath_status 00:21:49.668 ************************************ 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:49.668 * Looking for test storage... 
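
Teardown then stops the host app and the target. The killprocess helper traced above (common/autotest_common.sh@948-@972) checks liveness first and refuses to signal a sudo wrapper directly; a simplified sketch (the real one also branches on uname and, for sudo, signals the child instead):

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1
        if kill -0 "$pid"; then
            process_name=$(ps --no-headers -o comm= "$pid")
            if [[ $process_name != sudo ]]; then
                echo "killing process with pid $pid"
                kill "$pid"
            fi
            wait "$pid" || true   # reap; only works for children of this shell
        fi
    }
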
00:21:49.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.668 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
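
For reference, the nvmf/common.sh defaults sourced above, collected from the trace into one block (the hostid is presumably derived from the generated NQN, as the matching uuid suggests):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # fresh NQN per run
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # the uuid portion
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NET_TYPE=phy
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
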
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.669 00:25:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:49.669 00:25:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:51.577 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:51.577 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
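[editor's aside] The device-discovery trace above resolves kernel net devices from PCI functions via sysfs. A minimal standalone sketch of that lookup, using the two E810 functions found in this run (the loop mirrors the pci_net_devs expansion in nvmf/common.sh; device names are specific to this job):

for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        # each entry under .../net/ is a kernel netdev name, e.g. cvl_0_0
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done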
00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:51.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:51.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:51.577 00:25:42 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:51.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:21:51.577 00:21:51.577 --- 10.0.0.2 ping statistics --- 00:21:51.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.577 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:21:51.577 00:21:51.577 --- 10.0.0.1 ping statistics --- 00:21:51.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.577 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2356465 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2356465 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2356465 ']' 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.577 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.578 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.578 00:25:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:51.578 [2024-07-16 00:25:42.998272] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
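[editor's aside] The nvmf_tcp_init trace above builds a network namespace so that the NVMe/TCP target and the initiator use separate network stacks on the same host. A condensed sketch of that sequence, with the commands taken from this run (interface names, namespace name, and 10.0.0.0/24 addresses are specific to this job):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # verify initiator -> target reachability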
00:21:51.578 [2024-07-16 00:25:42.998344] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.578 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.578 [2024-07-16 00:25:43.059485] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:51.578 [2024-07-16 00:25:43.162559] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.578 [2024-07-16 00:25:43.162615] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.578 [2024-07-16 00:25:43.162646] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.578 [2024-07-16 00:25:43.162658] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.578 [2024-07-16 00:25:43.162668] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.578 [2024-07-16 00:25:43.162732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.578 [2024-07-16 00:25:43.162736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.835 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.835 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:51.835 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:51.835 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:51.835 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:51.835 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.835 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2356465 00:21:51.835 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:52.091 [2024-07-16 00:25:43.515378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.091 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:52.348 Malloc0 00:21:52.348 00:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:52.604 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.861 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.119 [2024-07-16 00:25:44.540940] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.119 00:25:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:53.377 [2024-07-16 00:25:44.781609] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2356739 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2356739 /var/tmp/bdevperf.sock 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2356739 ']' 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.377 00:25:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:53.634 00:25:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.634 00:25:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:53.634 00:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:53.893 00:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:54.152 Nvme0n1 00:21:54.411 00:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:54.668 Nvme0n1 00:21:54.668 00:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:54.668 00:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:57.229 00:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:57.229 00:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:57.229 00:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:57.229 00:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:58.167 00:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:58.167 00:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:58.167 00:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.167 00:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:58.732 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.732 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:58.732 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.732 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:58.732 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:58.732 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:58.732 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.732 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:58.991 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.991 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:58.991 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.991 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:59.249 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.249 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:59.249 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.249 00:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:59.506 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.506 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:59.506 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.506 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:59.763 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.763 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:59.763 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:00.021 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:00.279 00:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:01.215 00:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:01.215 00:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:01.215 00:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.215 00:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:01.473 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:01.473 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:01.473 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.473 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:01.731 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.731 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:01.731 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.731 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:01.989 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.989 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:01.989 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.989 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:02.246 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.246 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:02.246 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.246 00:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:02.504 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.504 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:02.504 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.504 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:02.762 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.762 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:02.762 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:03.025 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:03.287 00:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:04.222 00:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:04.222 00:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:04.222 00:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.222 00:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:04.480 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.480 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:04.480 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.480 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:04.738 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.738 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:04.738 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.738 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:04.996 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.996 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:04.996 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.997 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:05.254 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.254 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:05.254 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.254 00:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:05.512 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.512 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:05.512 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.512 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:05.770 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.770 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:05.770 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:06.028 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:06.287 00:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:07.662 00:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:07.662 00:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:07.662 00:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.662 00:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:07.662 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.662 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:07.662 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.662 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:07.919 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.919 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:07.919 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.919 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.202 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.202 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.202 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.202 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:08.459 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.459 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:08.459 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.459 00:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.716 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:08.716 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:08.716 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.716 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:08.982 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.982 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:08.982 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:09.240 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:09.497 00:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:10.459 00:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:10.459 00:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:10.459 00:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.459 00:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:10.716 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.716 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:10.716 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.716 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.974 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.974 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.974 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.974 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.231 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.231 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:11.231 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.231 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:11.489 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.489 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:11.489 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.489 00:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:11.747 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.747 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:11.747 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.747 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:12.005 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.005 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:12.005 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:12.261 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:12.518 00:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:13.450 00:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:13.450 00:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:13.450 00:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.450 00:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:13.707 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:13.707 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:13.707 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.707 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:13.965 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.965 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:13.965 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.965 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:14.223 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.223 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:14.223 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.223 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:14.480 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.480 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:14.480 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.480 00:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:14.737 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.737 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:14.737 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.737 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:14.994 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.994 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:15.251 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:15.251 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:15.509 00:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:15.766 00:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:16.698 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:16.698 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:16.698 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.698 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:16.956 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.956 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:16.956 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.956 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.214 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.214 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.214 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.214 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:17.471 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.471 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:17.471 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.471 00:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:17.728 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.728 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:17.728 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.728 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:17.985 00:26:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.985 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:17.985 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.985 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.243 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.243 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:18.243 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:18.501 00:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:18.759 00:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:19.690 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:19.690 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:19.690 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.690 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:19.948 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.948 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:19.948 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.948 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:20.205 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.205 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:20.205 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.205 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:20.464 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.464 00:26:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:20.464 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.464 00:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:20.722 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.722 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:20.722 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.722 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:20.979 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.979 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:20.979 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.979 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:21.237 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.237 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:21.237 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:21.495 00:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:21.754 00:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:22.688 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:22.688 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:22.688 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.688 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:22.947 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.947 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:22.947 00:26:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.947 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.242 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.242 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:23.242 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.242 00:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:23.500 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.500 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:23.500 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.500 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:23.757 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.757 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:23.757 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.757 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.015 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.015 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:24.016 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.016 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:24.272 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.272 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:24.272 00:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:24.529 00:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:24.786 00:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:25.718 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:25.718 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:25.718 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.718 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:25.976 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.976 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:25.976 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.976 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:26.233 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.233 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:26.233 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.233 00:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:26.488 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.488 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:26.488 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.488 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:26.744 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.744 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:26.744 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.744 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:27.000 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.000 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:27.000 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.000 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2356739 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2356739 ']' 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2356739 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356739 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356739' 00:22:27.256 killing process with pid 2356739 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2356739 00:22:27.256 00:26:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2356739 00:22:27.516 Connection closed with partial response: 00:22:27.517 00:22:27.517 00:22:27.517 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2356739 00:22:27.517 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:27.517 [2024-07-16 00:25:44.839642] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:22:27.517 [2024-07-16 00:25:44.839729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356739 ] 00:22:27.517 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.517 [2024-07-16 00:25:44.899026] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.517 [2024-07-16 00:25:45.008697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.517 Running I/O for 90 seconds... 
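The teardown above runs the shared killprocess helper from common/autotest_common.sh: the traced steps (@948-@972) validate the pid, identify the process name, refuse to kill a sudo wrapper, then kill and reap bdevperf, whose captured output (try.txt) is dumped below. A sketch of that helper under the same assumptions -- the guard order follows the trace, and the error handling is simplified:

  # killprocess <pid>: sanity-check the pid, name it, kill it, reap it.
  killprocess() {
      [[ -n $1 ]] || return 1                           # @948: a pid must be given
      kill -0 "$1" || return 1                          # @952: pid must be alive
      local process_name
      if [[ $(uname) == Linux ]]; then                  # @953
          process_name=$(ps --no-headers -o comm= "$1") # @954: reactor_2 in this run
      fi
      [[ $process_name != sudo ]] || return 1           # @958: never kill sudo itself
      echo "killing process with pid $1"                # @966
      kill "$1"                                         # @967
      wait "$1"                                         # @972: reap, propagate exit status
  }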
00:22:27.517 [2024-07-16 00:26:00.663414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.517 [2024-07-16 00:26:00.663468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
[... repeated nvme_qpair command/completion NOTICE pairs elided: WRITE lba 67560-67680 and READ lba 66664-67536 on qid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 
00:22:27.520 [2024-07-16 00:26:00.670463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.520 [2024-07-16 00:26:00.670479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:22:27.520 [2024-07-16 00:26:16.274618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.520 [2024-07-16 00:26:16.274670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
[... repeated nvme_qpair command/completion NOTICE pairs elided: WRITE lba 57376-57640 on qid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 
00:22:27.520 [2024-07-16 00:26:16.278420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:27.520 [2024-07-16 00:26:16.278435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:27.520 [2024-07-16 00:26:16.278472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.520 [2024-07-16 00:26:16.278488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:27.520 [2024-07-16 00:26:16.278509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.520 [2024-07-16 00:26:16.278524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:27.520 [2024-07-16 00:26:16.278545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.520 [2024-07-16 00:26:16.278560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:27.520 [2024-07-16 00:26:16.278581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.520 [2024-07-16 00:26:16.278595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:27.520 [2024-07-16 00:26:16.278616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.520 [2024-07-16 00:26:16.278631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.278652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.278667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.278704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.278720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.278742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.278757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.278779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.278795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.278817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.278836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.278858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.278874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.278906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.278923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.278945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.278961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.279933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.279959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.279986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:27.521 [2024-07-16 00:26:16.280332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.521 [2024-07-16 00:26:16.280347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:27.521 Received shutdown signal, test time was about 32.398812 seconds 00:22:27.521 00:22:27.521 Latency(us) 00:22:27.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.521 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:27.521 Verification LBA range: start 0x0 length 0x4000 00:22:27.521 Nvme0n1 : 32.40 7980.49 31.17 0.00 0.00 16011.50 433.87 4026531.84 00:22:27.521 =================================================================================================================== 00:22:27.521 Total : 7980.49 31.17 0.00 0.00 16011.50 433.87 4026531.84 00:22:27.521 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:22:28.090 rmmod nvme_tcp 00:22:28.090 rmmod nvme_fabrics 00:22:28.090 rmmod nvme_keyring 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2356465 ']' 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2356465 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2356465 ']' 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2356465 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356465 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356465' 00:22:28.090 killing process with pid 2356465 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2356465 00:22:28.090 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2356465 00:22:28.348 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.348 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.348 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.348 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.348 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.348 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.348 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.348 00:26:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.245 00:26:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.245 00:22:30.245 real 0m41.084s 00:22:30.245 user 2m3.901s 00:22:30.245 sys 0m10.427s 00:22:30.245 00:26:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:30.245 00:26:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:30.245 ************************************ 00:22:30.245 END TEST nvmf_host_multipath_status 00:22:30.245 ************************************ 00:22:30.245 00:26:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:30.245 00:26:21 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc 
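Condensed, the nvmftestfini teardown traced above amounts to the following shell sequence. This is a sketch, not the verbatim helper: the pid 2356465, the cvl_0_1 interface and the cvl_0_0_ns_spdk namespace are specific to this run, and treating _remove_spdk_ns as a plain namespace delete is our assumption.

    sync                              # flush dirty pages before unloading drivers
    modprobe -v -r nvme-tcp           # also drops nvme_fabrics and nvme_keyring, as the rmmod output shows
    modprobe -v -r nvme-fabrics
    kill 2356465 && wait 2356465      # killprocess: stop the nvmf_tgt reactor
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns: drop the target namespace
    ip -4 addr flush cvl_0_1          # clear the initiator-side interface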
00:22:30.245 00:26:21 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:30.245 ************************************
00:22:30.245 START TEST nvmf_discovery_remove_ifc
00:22:30.245 ************************************
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:30.503 * Looking for test storage...
00:22:30.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVMF_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[five further repetitions of the same three toolchain directories elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 prepend the /opt/go, /opt/protoc and /opt/golangci directories to PATH again, @5 exports PATH and @6 echoes the identical value; the near-duplicate PATH lines are elided ...]
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:32.415 00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=()
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=()
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=()
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=()
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=()
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=()
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
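The scan above resolves each supported PCI function to its kernel net device through sysfs. A minimal standalone sketch of the same lookup (the address 0000:0a:00.0 and the resulting name cvl_0_0 are taken from this run):

    pci=0000:0a:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        # each entry under .../net/ is a kernel netdev backed by this PCI function
        echo "Found net devices under $pci: ${dev##*/}"   # prints cvl_0_0 on this host
    done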
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:32.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:32.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms
00:22:32.415
00:22:32.415 --- 10.0.0.2 ping statistics ---
00:22:32.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:32.415 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:22:32.415 00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:32.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:32.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms
00:22:32.415
00:22:32.415 --- 10.0.0.1 ping statistics ---
00:22:32.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:32.415 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
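Pulled out of the trace, nvmf_tcp_init builds a two-namespace topology: the target port moves into its own network namespace while the initiator port stays in the root namespace, and the two pings verify reachability in both directions. A condensed sketch of the same commands, with the interface and namespace names used in this run:

    ip netns add cvl_0_0_ns_spdk                                   # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the first port (target) into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check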
00:22:32.415 00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2362943
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2362943
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2362943 ']'
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:32.415 [2024-07-16 00:26:24.014604] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:22:32.415 [2024-07-16 00:26:24.014686] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:32.673 EAL: No free 2048 kB hugepages reported on node 1
00:22:32.673 [2024-07-16 00:26:24.081529] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:32.673 [2024-07-16 00:26:24.196738] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:32.673 [2024-07-16 00:26:24.196803] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:32.673 [2024-07-16 00:26:24.196818] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:32.673 [2024-07-16 00:26:24.196831] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:32.673 [2024-07-16 00:26:24.196843] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:32.673 [2024-07-16 00:26:24.196874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:33.602 00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:33.602 [2024-07-16 00:26:24.985193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:33.602 [2024-07-16 00:26:24.993352] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:22:33.602 null0
00:22:33.602 [2024-07-16 00:26:25.025316] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:33.602 00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2363094
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2363094 /tmp/host.sock
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2363094 ']'
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:33.602 00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
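waitforlisten, seen here once for the target on /var/tmp/spdk.sock and once for the host app on /tmp/host.sock, just polls the RPC socket until the application answers. Roughly, as a sketch rather than the verbatim helper (the retry bound comes from max_retries=100 above; the half-second polling interval is illustrative):

    sock=/tmp/host.sock
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the app is listening on the socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done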
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:33.602 [2024-07-16 00:26:25.089139] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:22:33.602 [2024-07-16 00:26:25.089227] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363094 ]
00:22:33.602 EAL: No free 2048 kB hugepages reported on node 1
00:22:33.602 [2024-07-16 00:26:25.149716] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:33.860 [2024-07-16 00:26:25.266604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:33.860 00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:35.233 [2024-07-16 00:26:26.465095] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:35.233 [2024-07-16 00:26:26.465130] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:35.233 [2024-07-16 00:26:26.465156] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:35.233 [2024-07-16 00:26:26.551466] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:22:35.233 [2024-07-16 00:26:26.734853] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:22:35.233 [2024-07-16 00:26:26.734951] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:22:35.233 [2024-07-16 00:26:26.734990] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:22:35.233 [2024-07-16 00:26:26.735015] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:35.233 [2024-07-16 00:26:26.735038] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:35.233 [2024-07-16 00:26:26.741887] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b129d0 was disconnected and freed. delete nvme_qpair.
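The heart of the test is now visible in the trace: start discovery against the target, wait for the discovered namespace to appear as a bdev, then pull the target's interface out from under the connection and wait for the bdev to disappear. In outline, a sketch using the values from this run, with wait_for_bdev paraphrased as the get_bdev_list polling seen in the log:

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }
    rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
    # wait until the discovered namespace shows up as bdev nvme0n1
    while [[ "$(rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do sleep 1; done
    # remove the target-side interface while the connection is live
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # the bdev list should drain to empty once the ctrlr-loss timeout fires
    while [[ -n "$(rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]]; do sleep 1; done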
00:22:35.233 00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:35.491 00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
[... the same one-second get_bdev_list poll (rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs, then [[ nvme0n1 != '' ]] and sleep 1) repeats from 00:26:27 through 00:26:32 while nvme0n1 is still listed; the near-identical iterations are elided ...]
00:22:40.644 [2024-07-16 00:26:32.176085] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:22:40.644 [2024-07-16 00:26:32.176148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:40.644 [2024-07-16 00:26:32.176170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three further ASYNC EVENT REQUEST commands (cid:1 to cid:3) are aborted the same way; their command/completion pairs are elided ...]
00:22:40.644 [2024-07-16 00:26:32.176266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:22:40.644 [2024-07-16 00:26:32.176279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:40.644 [2024-07-16 00:26:32.176293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad94e0 is same with the state(5) to be set
00:22:40.644 [2024-07-16 00:26:32.186104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad94e0 (9): Bad file descriptor
00:22:40.644 [2024-07-16 00:26:32.196165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:41.581 00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:41.839 [2024-07-16 00:26:33.234935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:22:41.839 [2024-07-16
00:26:33.235005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad94e0 with addr=10.0.0.2, port=4420 00:22:41.839 [2024-07-16 00:26:33.235035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad94e0 is same with the state(5) to be set 00:22:41.839 [2024-07-16 00:26:33.235088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad94e0 (9): Bad file descriptor 00:22:41.839 [2024-07-16 00:26:33.235576] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:41.839 [2024-07-16 00:26:33.235613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:41.839 [2024-07-16 00:26:33.235632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:41.839 [2024-07-16 00:26:33.235651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:41.839 [2024-07-16 00:26:33.235686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:41.839 [2024-07-16 00:26:33.235708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:41.839 00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.839 00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:41.839 00:26:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.774 [2024-07-16 00:26:34.238211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.774 [2024-07-16 00:26:34.238237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.774 [2024-07-16 00:26:34.238265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.774 [2024-07-16 00:26:34.238279] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:42.774 [2024-07-16 00:26:34.238298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
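The connect() errno 110 and failed resets above are the point of the test: earlier in this trace (host/discovery_remove_ifc.sh lines 75-76) the target's address was removed and its link downed inside the target network namespace. Those two steps, exactly as traced, with the interface and netns names used on this CI host:

# Starve the initiator of its target so reconnect attempts time out.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down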
00:22:42.774 [2024-07-16 00:26:34.238339] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:42.774 [2024-07-16 00:26:34.238393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.774 [2024-07-16 00:26:34.238416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.774 [2024-07-16 00:26:34.238435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.774 [2024-07-16 00:26:34.238449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.774 [2024-07-16 00:26:34.238473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.774 [2024-07-16 00:26:34.238487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.774 [2024-07-16 00:26:34.238501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.774 [2024-07-16 00:26:34.238514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.774 [2024-07-16 00:26:34.238528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.775 [2024-07-16 00:26:34.238542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.775 [2024-07-16 00:26:34.238555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
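With the discovery entry removed, the wait_for_bdev '' call from line 79 of the traced script returns only once the bdev list drains to empty. An equivalent one-off check using the same RPC call seen throughout this trace:

# True only after the nvme0n1 bdev has been torn down on the host app.
[[ -z "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]] \
    && echo 'bdev list empty: removal confirmed'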
00:22:42.775 [2024-07-16 00:26:34.238666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad8960 (9): Bad file descriptor 00:22:42.775 [2024-07-16 00:26:34.239689] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:42.775 [2024-07-16 00:26:34.239711] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:42.775 00:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:44.155 00:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.722 [2024-07-16 00:26:36.290102] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:44.722 [2024-07-16 00:26:36.290132] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:44.722 [2024-07-16 00:26:36.290156] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:44.982 [2024-07-16 00:26:36.417591] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:44.982 00:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:45.241 [2024-07-16 00:26:36.643372] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:45.241 [2024-07-16 00:26:36.643447] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:45.241 [2024-07-16 00:26:36.643479] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:45.241 [2024-07-16 00:26:36.643503] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:45.241 [2024-07-16 00:26:36.643517] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.241 [2024-07-16 00:26:36.648228] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b1c2a0 was disconnected and freed. delete nvme_qpair. 
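The re-attach above mirrors the restore steps traced at lines 82-86 of the script: the address is put back, the link raised, and the test waits for the discovery service to surface the subsystem again, this time as nvme1n1. As traced:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1    # discovery re-creates the namespace bdev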
00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2363094 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2363094 ']' 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2363094 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363094 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363094' 00:22:46.178 killing process with pid 2363094 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2363094 00:22:46.178 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2363094 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.438 rmmod nvme_tcp 00:22:46.438 rmmod nvme_fabrics 00:22:46.438 rmmod nvme_keyring 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
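The teardown above runs autotest_common.sh's killprocess helper twice (pids 2363094 and 2362943). A hedged reconstruction from the xtrace; the real helper likely carries extra error handling not visible here:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
    # The traced run resolves the command name first (reactor_0/reactor_1
    # here) and only special-cases processes named "sudo".
    if [[ $(uname) == Linux && $(ps --no-headers -o comm= "$pid") != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" || true    # reap it if it was our child
}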
00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2362943 ']' 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2362943 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2362943 ']' 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2362943 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2362943 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2362943' 00:22:46.438 killing process with pid 2362943 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2362943 00:22:46.438 00:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2362943 00:22:46.698 00:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.698 00:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.698 00:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.698 00:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.698 00:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.698 00:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.698 00:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.698 00:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.229 00:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.229 00:22:49.229 real 0m18.369s 00:22:49.229 user 0m26.710s 00:22:49.229 sys 0m2.934s 00:22:49.229 00:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:49.229 00:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.229 ************************************ 00:22:49.229 END TEST nvmf_discovery_remove_ifc 00:22:49.229 ************************************ 00:22:49.229 00:26:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:49.229 00:26:40 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:49.229 00:26:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:49.229 00:26:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:49.229 00:26:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.229 ************************************ 00:22:49.229 START TEST nvmf_identify_kernel_target 00:22:49.229 ************************************ 
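run_test brackets each suite with the banners above and propagates its exit code. To reproduce this suite outside Jenkins, invoke the script the same way the trace does (workspace path as on this CI node; root privileges and a built SPDK tree are assumed):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp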
00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:49.229 * Looking for test storage... 00:22:49.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.229 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:49.230 00:26:40 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.230 00:26:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:51.129 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.129 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:51.130 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:51.130 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:51.130 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:22:51.130 00:22:51.130 --- 10.0.0.2 ping statistics --- 00:22:51.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.130 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:51.130 00:22:51.130 --- 10.0.0.1 ping statistics --- 00:22:51.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.130 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:51.130 00:26:42 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:51.130 00:26:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:52.066 Waiting for block devices as requested 00:22:52.066 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:52.325 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:52.325 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:52.583 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:52.583 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:52.583 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:52.583 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:52.583 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:52.842 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:52.842 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:52.842 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:52.842 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:53.100 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:53.100 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:53.100 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:53.360 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:53.360 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:53.360 No valid GPT data, bailing 00:22:53.360 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:53.643 00:26:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:53.643 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:53.643 00:22:53.643 Discovery Log Number of Records 2, Generation counter 2 00:22:53.643 =====Discovery Log Entry 0====== 00:22:53.643 trtype: tcp 00:22:53.643 adrfam: ipv4 00:22:53.643 subtype: current discovery subsystem 00:22:53.643 treq: not specified, sq flow control disable supported 00:22:53.643 portid: 1 00:22:53.643 trsvcid: 4420 00:22:53.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:53.643 traddr: 10.0.0.1 00:22:53.643 eflags: none 00:22:53.643 sectype: none 00:22:53.643 =====Discovery Log Entry 1====== 00:22:53.643 trtype: tcp 00:22:53.643 adrfam: ipv4 00:22:53.643 subtype: nvme subsystem 00:22:53.643 treq: not specified, sq flow control disable supported 00:22:53.643 portid: 1 00:22:53.643 trsvcid: 4420 00:22:53.643 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:53.643 traddr: 10.0.0.1 00:22:53.643 eflags: none 00:22:53.643 sectype: none 00:22:53.643 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:53.643 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:53.643 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.643 ===================================================== 00:22:53.643 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:53.643 ===================================================== 00:22:53.643 Controller Capabilities/Features 00:22:53.643 ================================ 00:22:53.643 Vendor ID: 0000 00:22:53.643 Subsystem Vendor ID: 0000 00:22:53.643 Serial Number: 2653170071b5a4becdc3 00:22:53.643 Model Number: Linux 00:22:53.643 Firmware Version: 6.7.0-68 00:22:53.643 Recommended Arb Burst: 0 00:22:53.643 IEEE OUI Identifier: 00 00 00 00:22:53.643 Multi-path I/O 00:22:53.643 May have multiple subsystem ports: No 00:22:53.643 May have multiple 
controllers: No 00:22:53.643 Associated with SR-IOV VF: No 00:22:53.643 Max Data Transfer Size: Unlimited 00:22:53.643 Max Number of Namespaces: 0 00:22:53.643 Max Number of I/O Queues: 1024 00:22:53.643 NVMe Specification Version (VS): 1.3 00:22:53.643 NVMe Specification Version (Identify): 1.3 00:22:53.643 Maximum Queue Entries: 1024 00:22:53.643 Contiguous Queues Required: No 00:22:53.643 Arbitration Mechanisms Supported 00:22:53.643 Weighted Round Robin: Not Supported 00:22:53.643 Vendor Specific: Not Supported 00:22:53.643 Reset Timeout: 7500 ms 00:22:53.643 Doorbell Stride: 4 bytes 00:22:53.643 NVM Subsystem Reset: Not Supported 00:22:53.643 Command Sets Supported 00:22:53.644 NVM Command Set: Supported 00:22:53.644 Boot Partition: Not Supported 00:22:53.644 Memory Page Size Minimum: 4096 bytes 00:22:53.644 Memory Page Size Maximum: 4096 bytes 00:22:53.644 Persistent Memory Region: Not Supported 00:22:53.644 Optional Asynchronous Events Supported 00:22:53.644 Namespace Attribute Notices: Not Supported 00:22:53.644 Firmware Activation Notices: Not Supported 00:22:53.644 ANA Change Notices: Not Supported 00:22:53.644 PLE Aggregate Log Change Notices: Not Supported 00:22:53.644 LBA Status Info Alert Notices: Not Supported 00:22:53.644 EGE Aggregate Log Change Notices: Not Supported 00:22:53.644 Normal NVM Subsystem Shutdown event: Not Supported 00:22:53.644 Zone Descriptor Change Notices: Not Supported 00:22:53.644 Discovery Log Change Notices: Supported 00:22:53.644 Controller Attributes 00:22:53.644 128-bit Host Identifier: Not Supported 00:22:53.644 Non-Operational Permissive Mode: Not Supported 00:22:53.644 NVM Sets: Not Supported 00:22:53.644 Read Recovery Levels: Not Supported 00:22:53.644 Endurance Groups: Not Supported 00:22:53.644 Predictable Latency Mode: Not Supported 00:22:53.644 Traffic Based Keep ALive: Not Supported 00:22:53.644 Namespace Granularity: Not Supported 00:22:53.644 SQ Associations: Not Supported 00:22:53.644 UUID List: Not Supported 00:22:53.644 Multi-Domain Subsystem: Not Supported 00:22:53.644 Fixed Capacity Management: Not Supported 00:22:53.644 Variable Capacity Management: Not Supported 00:22:53.644 Delete Endurance Group: Not Supported 00:22:53.644 Delete NVM Set: Not Supported 00:22:53.644 Extended LBA Formats Supported: Not Supported 00:22:53.644 Flexible Data Placement Supported: Not Supported 00:22:53.644 00:22:53.644 Controller Memory Buffer Support 00:22:53.644 ================================ 00:22:53.644 Supported: No 00:22:53.644 00:22:53.644 Persistent Memory Region Support 00:22:53.644 ================================ 00:22:53.644 Supported: No 00:22:53.644 00:22:53.644 Admin Command Set Attributes 00:22:53.644 ============================ 00:22:53.644 Security Send/Receive: Not Supported 00:22:53.644 Format NVM: Not Supported 00:22:53.644 Firmware Activate/Download: Not Supported 00:22:53.644 Namespace Management: Not Supported 00:22:53.644 Device Self-Test: Not Supported 00:22:53.644 Directives: Not Supported 00:22:53.644 NVMe-MI: Not Supported 00:22:53.644 Virtualization Management: Not Supported 00:22:53.644 Doorbell Buffer Config: Not Supported 00:22:53.644 Get LBA Status Capability: Not Supported 00:22:53.644 Command & Feature Lockdown Capability: Not Supported 00:22:53.644 Abort Command Limit: 1 00:22:53.644 Async Event Request Limit: 1 00:22:53.644 Number of Firmware Slots: N/A 00:22:53.644 Firmware Slot 1 Read-Only: N/A 00:22:53.644 Firmware Activation Without Reset: N/A 00:22:53.644 Multiple Update Detection Support: N/A 
00:22:53.644 Firmware Update Granularity: No Information Provided 00:22:53.644 Per-Namespace SMART Log: No 00:22:53.644 Asymmetric Namespace Access Log Page: Not Supported 00:22:53.644 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:53.644 Command Effects Log Page: Not Supported 00:22:53.644 Get Log Page Extended Data: Supported 00:22:53.644 Telemetry Log Pages: Not Supported 00:22:53.644 Persistent Event Log Pages: Not Supported 00:22:53.644 Supported Log Pages Log Page: May Support 00:22:53.644 Commands Supported & Effects Log Page: Not Supported 00:22:53.644 Feature Identifiers & Effects Log Page:May Support 00:22:53.644 NVMe-MI Commands & Effects Log Page: May Support 00:22:53.644 Data Area 4 for Telemetry Log: Not Supported 00:22:53.644 Error Log Page Entries Supported: 1 00:22:53.644 Keep Alive: Not Supported 00:22:53.644 00:22:53.644 NVM Command Set Attributes 00:22:53.644 ========================== 00:22:53.644 Submission Queue Entry Size 00:22:53.644 Max: 1 00:22:53.644 Min: 1 00:22:53.644 Completion Queue Entry Size 00:22:53.644 Max: 1 00:22:53.644 Min: 1 00:22:53.644 Number of Namespaces: 0 00:22:53.644 Compare Command: Not Supported 00:22:53.644 Write Uncorrectable Command: Not Supported 00:22:53.644 Dataset Management Command: Not Supported 00:22:53.644 Write Zeroes Command: Not Supported 00:22:53.644 Set Features Save Field: Not Supported 00:22:53.644 Reservations: Not Supported 00:22:53.644 Timestamp: Not Supported 00:22:53.644 Copy: Not Supported 00:22:53.644 Volatile Write Cache: Not Present 00:22:53.644 Atomic Write Unit (Normal): 1 00:22:53.644 Atomic Write Unit (PFail): 1 00:22:53.644 Atomic Compare & Write Unit: 1 00:22:53.644 Fused Compare & Write: Not Supported 00:22:53.644 Scatter-Gather List 00:22:53.644 SGL Command Set: Supported 00:22:53.644 SGL Keyed: Not Supported 00:22:53.644 SGL Bit Bucket Descriptor: Not Supported 00:22:53.644 SGL Metadata Pointer: Not Supported 00:22:53.644 Oversized SGL: Not Supported 00:22:53.644 SGL Metadata Address: Not Supported 00:22:53.644 SGL Offset: Supported 00:22:53.644 Transport SGL Data Block: Not Supported 00:22:53.644 Replay Protected Memory Block: Not Supported 00:22:53.644 00:22:53.644 Firmware Slot Information 00:22:53.644 ========================= 00:22:53.644 Active slot: 0 00:22:53.644 00:22:53.644 00:22:53.644 Error Log 00:22:53.644 ========= 00:22:53.644 00:22:53.644 Active Namespaces 00:22:53.644 ================= 00:22:53.644 Discovery Log Page 00:22:53.644 ================== 00:22:53.644 Generation Counter: 2 00:22:53.644 Number of Records: 2 00:22:53.644 Record Format: 0 00:22:53.644 00:22:53.644 Discovery Log Entry 0 00:22:53.644 ---------------------- 00:22:53.644 Transport Type: 3 (TCP) 00:22:53.644 Address Family: 1 (IPv4) 00:22:53.644 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:53.644 Entry Flags: 00:22:53.644 Duplicate Returned Information: 0 00:22:53.644 Explicit Persistent Connection Support for Discovery: 0 00:22:53.644 Transport Requirements: 00:22:53.644 Secure Channel: Not Specified 00:22:53.644 Port ID: 1 (0x0001) 00:22:53.644 Controller ID: 65535 (0xffff) 00:22:53.644 Admin Max SQ Size: 32 00:22:53.644 Transport Service Identifier: 4420 00:22:53.644 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:53.644 Transport Address: 10.0.0.1 00:22:53.644 Discovery Log Entry 1 00:22:53.644 ---------------------- 00:22:53.644 Transport Type: 3 (TCP) 00:22:53.644 Address Family: 1 (IPv4) 00:22:53.644 Subsystem Type: 2 (NVM Subsystem) 00:22:53.644 Entry Flags: 
00:22:53.644 Duplicate Returned Information: 0 00:22:53.644 Explicit Persistent Connection Support for Discovery: 0 00:22:53.644 Transport Requirements: 00:22:53.644 Secure Channel: Not Specified 00:22:53.644 Port ID: 1 (0x0001) 00:22:53.644 Controller ID: 65535 (0xffff) 00:22:53.644 Admin Max SQ Size: 32 00:22:53.644 Transport Service Identifier: 4420 00:22:53.644 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:53.644 Transport Address: 10.0.0.1 00:22:53.644 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:53.644 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.904 get_feature(0x01) failed 00:22:53.904 get_feature(0x02) failed 00:22:53.904 get_feature(0x04) failed 00:22:53.904 ===================================================== 00:22:53.904 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:53.904 ===================================================== 00:22:53.904 Controller Capabilities/Features 00:22:53.904 ================================ 00:22:53.904 Vendor ID: 0000 00:22:53.904 Subsystem Vendor ID: 0000 00:22:53.904 Serial Number: a3994a5400a8e8873194 00:22:53.904 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:53.904 Firmware Version: 6.7.0-68 00:22:53.904 Recommended Arb Burst: 6 00:22:53.904 IEEE OUI Identifier: 00 00 00 00:22:53.904 Multi-path I/O 00:22:53.904 May have multiple subsystem ports: Yes 00:22:53.904 May have multiple controllers: Yes 00:22:53.904 Associated with SR-IOV VF: No 00:22:53.904 Max Data Transfer Size: Unlimited 00:22:53.904 Max Number of Namespaces: 1024 00:22:53.904 Max Number of I/O Queues: 128 00:22:53.904 NVMe Specification Version (VS): 1.3 00:22:53.904 NVMe Specification Version (Identify): 1.3 00:22:53.904 Maximum Queue Entries: 1024 00:22:53.904 Contiguous Queues Required: No 00:22:53.904 Arbitration Mechanisms Supported 00:22:53.904 Weighted Round Robin: Not Supported 00:22:53.904 Vendor Specific: Not Supported 00:22:53.904 Reset Timeout: 7500 ms 00:22:53.904 Doorbell Stride: 4 bytes 00:22:53.904 NVM Subsystem Reset: Not Supported 00:22:53.904 Command Sets Supported 00:22:53.904 NVM Command Set: Supported 00:22:53.904 Boot Partition: Not Supported 00:22:53.904 Memory Page Size Minimum: 4096 bytes 00:22:53.904 Memory Page Size Maximum: 4096 bytes 00:22:53.904 Persistent Memory Region: Not Supported 00:22:53.904 Optional Asynchronous Events Supported 00:22:53.904 Namespace Attribute Notices: Supported 00:22:53.904 Firmware Activation Notices: Not Supported 00:22:53.904 ANA Change Notices: Supported 00:22:53.904 PLE Aggregate Log Change Notices: Not Supported 00:22:53.904 LBA Status Info Alert Notices: Not Supported 00:22:53.904 EGE Aggregate Log Change Notices: Not Supported 00:22:53.904 Normal NVM Subsystem Shutdown event: Not Supported 00:22:53.904 Zone Descriptor Change Notices: Not Supported 00:22:53.904 Discovery Log Change Notices: Not Supported 00:22:53.904 Controller Attributes 00:22:53.904 128-bit Host Identifier: Supported 00:22:53.904 Non-Operational Permissive Mode: Not Supported 00:22:53.904 NVM Sets: Not Supported 00:22:53.904 Read Recovery Levels: Not Supported 00:22:53.904 Endurance Groups: Not Supported 00:22:53.904 Predictable Latency Mode: Not Supported 00:22:53.904 Traffic Based Keep ALive: Supported 00:22:53.904 Namespace Granularity: Not Supported 
00:22:53.904 SQ Associations: Not Supported 00:22:53.904 UUID List: Not Supported 00:22:53.904 Multi-Domain Subsystem: Not Supported 00:22:53.905 Fixed Capacity Management: Not Supported 00:22:53.905 Variable Capacity Management: Not Supported 00:22:53.905 Delete Endurance Group: Not Supported 00:22:53.905 Delete NVM Set: Not Supported 00:22:53.905 Extended LBA Formats Supported: Not Supported 00:22:53.905 Flexible Data Placement Supported: Not Supported 00:22:53.905 00:22:53.905 Controller Memory Buffer Support 00:22:53.905 ================================ 00:22:53.905 Supported: No 00:22:53.905 00:22:53.905 Persistent Memory Region Support 00:22:53.905 ================================ 00:22:53.905 Supported: No 00:22:53.905 00:22:53.905 Admin Command Set Attributes 00:22:53.905 ============================ 00:22:53.905 Security Send/Receive: Not Supported 00:22:53.905 Format NVM: Not Supported 00:22:53.905 Firmware Activate/Download: Not Supported 00:22:53.905 Namespace Management: Not Supported 00:22:53.905 Device Self-Test: Not Supported 00:22:53.905 Directives: Not Supported 00:22:53.905 NVMe-MI: Not Supported 00:22:53.905 Virtualization Management: Not Supported 00:22:53.905 Doorbell Buffer Config: Not Supported 00:22:53.905 Get LBA Status Capability: Not Supported 00:22:53.905 Command & Feature Lockdown Capability: Not Supported 00:22:53.905 Abort Command Limit: 4 00:22:53.905 Async Event Request Limit: 4 00:22:53.905 Number of Firmware Slots: N/A 00:22:53.905 Firmware Slot 1 Read-Only: N/A 00:22:53.905 Firmware Activation Without Reset: N/A 00:22:53.905 Multiple Update Detection Support: N/A 00:22:53.905 Firmware Update Granularity: No Information Provided 00:22:53.905 Per-Namespace SMART Log: Yes 00:22:53.905 Asymmetric Namespace Access Log Page: Supported 00:22:53.905 ANA Transition Time : 10 sec 00:22:53.905 00:22:53.905 Asymmetric Namespace Access Capabilities 00:22:53.905 ANA Optimized State : Supported 00:22:53.905 ANA Non-Optimized State : Supported 00:22:53.905 ANA Inaccessible State : Supported 00:22:53.905 ANA Persistent Loss State : Supported 00:22:53.905 ANA Change State : Supported 00:22:53.905 ANAGRPID is not changed : No 00:22:53.905 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:53.905 00:22:53.905 ANA Group Identifier Maximum : 128 00:22:53.905 Number of ANA Group Identifiers : 128 00:22:53.905 Max Number of Allowed Namespaces : 1024 00:22:53.905 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:53.905 Command Effects Log Page: Supported 00:22:53.905 Get Log Page Extended Data: Supported 00:22:53.905 Telemetry Log Pages: Not Supported 00:22:53.905 Persistent Event Log Pages: Not Supported 00:22:53.905 Supported Log Pages Log Page: May Support 00:22:53.905 Commands Supported & Effects Log Page: Not Supported 00:22:53.905 Feature Identifiers & Effects Log Page:May Support 00:22:53.905 NVMe-MI Commands & Effects Log Page: May Support 00:22:53.905 Data Area 4 for Telemetry Log: Not Supported 00:22:53.905 Error Log Page Entries Supported: 128 00:22:53.905 Keep Alive: Supported 00:22:53.905 Keep Alive Granularity: 1000 ms 00:22:53.905 00:22:53.905 NVM Command Set Attributes 00:22:53.905 ========================== 00:22:53.905 Submission Queue Entry Size 00:22:53.905 Max: 64 00:22:53.905 Min: 64 00:22:53.905 Completion Queue Entry Size 00:22:53.905 Max: 16 00:22:53.905 Min: 16 00:22:53.905 Number of Namespaces: 1024 00:22:53.905 Compare Command: Not Supported 00:22:53.905 Write Uncorrectable Command: Not Supported 00:22:53.905 Dataset Management Command: Supported 
00:22:53.905 Write Zeroes Command: Supported 00:22:53.905 Set Features Save Field: Not Supported 00:22:53.905 Reservations: Not Supported 00:22:53.905 Timestamp: Not Supported 00:22:53.905 Copy: Not Supported 00:22:53.905 Volatile Write Cache: Present 00:22:53.905 Atomic Write Unit (Normal): 1 00:22:53.905 Atomic Write Unit (PFail): 1 00:22:53.905 Atomic Compare & Write Unit: 1 00:22:53.905 Fused Compare & Write: Not Supported 00:22:53.905 Scatter-Gather List 00:22:53.905 SGL Command Set: Supported 00:22:53.905 SGL Keyed: Not Supported 00:22:53.905 SGL Bit Bucket Descriptor: Not Supported 00:22:53.905 SGL Metadata Pointer: Not Supported 00:22:53.905 Oversized SGL: Not Supported 00:22:53.905 SGL Metadata Address: Not Supported 00:22:53.905 SGL Offset: Supported 00:22:53.905 Transport SGL Data Block: Not Supported 00:22:53.905 Replay Protected Memory Block: Not Supported 00:22:53.905 00:22:53.905 Firmware Slot Information 00:22:53.905 ========================= 00:22:53.905 Active slot: 0 00:22:53.905 00:22:53.905 Asymmetric Namespace Access 00:22:53.905 =========================== 00:22:53.905 Change Count : 0 00:22:53.905 Number of ANA Group Descriptors : 1 00:22:53.905 ANA Group Descriptor : 0 00:22:53.905 ANA Group ID : 1 00:22:53.905 Number of NSID Values : 1 00:22:53.905 Change Count : 0 00:22:53.905 ANA State : 1 00:22:53.905 Namespace Identifier : 1 00:22:53.905 00:22:53.905 Commands Supported and Effects 00:22:53.905 ============================== 00:22:53.905 Admin Commands 00:22:53.905 -------------- 00:22:53.905 Get Log Page (02h): Supported 00:22:53.905 Identify (06h): Supported 00:22:53.905 Abort (08h): Supported 00:22:53.905 Set Features (09h): Supported 00:22:53.905 Get Features (0Ah): Supported 00:22:53.905 Asynchronous Event Request (0Ch): Supported 00:22:53.905 Keep Alive (18h): Supported 00:22:53.905 I/O Commands 00:22:53.905 ------------ 00:22:53.905 Flush (00h): Supported 00:22:53.905 Write (01h): Supported LBA-Change 00:22:53.905 Read (02h): Supported 00:22:53.905 Write Zeroes (08h): Supported LBA-Change 00:22:53.905 Dataset Management (09h): Supported 00:22:53.905 00:22:53.905 Error Log 00:22:53.905 ========= 00:22:53.905 Entry: 0 00:22:53.905 Error Count: 0x3 00:22:53.905 Submission Queue Id: 0x0 00:22:53.905 Command Id: 0x5 00:22:53.905 Phase Bit: 0 00:22:53.905 Status Code: 0x2 00:22:53.905 Status Code Type: 0x0 00:22:53.905 Do Not Retry: 1 00:22:53.905 Error Location: 0x28 00:22:53.905 LBA: 0x0 00:22:53.905 Namespace: 0x0 00:22:53.905 Vendor Log Page: 0x0 00:22:53.905 ----------- 00:22:53.905 Entry: 1 00:22:53.905 Error Count: 0x2 00:22:53.905 Submission Queue Id: 0x0 00:22:53.905 Command Id: 0x5 00:22:53.905 Phase Bit: 0 00:22:53.905 Status Code: 0x2 00:22:53.905 Status Code Type: 0x0 00:22:53.905 Do Not Retry: 1 00:22:53.905 Error Location: 0x28 00:22:53.905 LBA: 0x0 00:22:53.905 Namespace: 0x0 00:22:53.905 Vendor Log Page: 0x0 00:22:53.905 ----------- 00:22:53.905 Entry: 2 00:22:53.905 Error Count: 0x1 00:22:53.905 Submission Queue Id: 0x0 00:22:53.905 Command Id: 0x4 00:22:53.905 Phase Bit: 0 00:22:53.905 Status Code: 0x2 00:22:53.905 Status Code Type: 0x0 00:22:53.905 Do Not Retry: 1 00:22:53.905 Error Location: 0x28 00:22:53.905 LBA: 0x0 00:22:53.905 Namespace: 0x0 00:22:53.905 Vendor Log Page: 0x0 00:22:53.905 00:22:53.905 Number of Queues 00:22:53.905 ================ 00:22:53.905 Number of I/O Submission Queues: 128 00:22:53.905 Number of I/O Completion Queues: 128 00:22:53.905 00:22:53.905 ZNS Specific Controller Data 00:22:53.905 
============================ 00:22:53.905 Zone Append Size Limit: 0 00:22:53.905 00:22:53.905 00:22:53.905 Active Namespaces 00:22:53.905 ================= 00:22:53.905 get_feature(0x05) failed 00:22:53.905 Namespace ID:1 00:22:53.905 Command Set Identifier: NVM (00h) 00:22:53.905 Deallocate: Supported 00:22:53.905 Deallocated/Unwritten Error: Not Supported 00:22:53.905 Deallocated Read Value: Unknown 00:22:53.905 Deallocate in Write Zeroes: Not Supported 00:22:53.905 Deallocated Guard Field: 0xFFFF 00:22:53.905 Flush: Supported 00:22:53.905 Reservation: Not Supported 00:22:53.905 Namespace Sharing Capabilities: Multiple Controllers 00:22:53.905 Size (in LBAs): 1953525168 (931GiB) 00:22:53.905 Capacity (in LBAs): 1953525168 (931GiB) 00:22:53.905 Utilization (in LBAs): 1953525168 (931GiB) 00:22:53.905 UUID: aa6a0816-35d0-4e85-8406-b689d33d9faa 00:22:53.905 Thin Provisioning: Not Supported 00:22:53.905 Per-NS Atomic Units: Yes 00:22:53.905 Atomic Boundary Size (Normal): 0 00:22:53.905 Atomic Boundary Size (PFail): 0 00:22:53.905 Atomic Boundary Offset: 0 00:22:53.905 NGUID/EUI64 Never Reused: No 00:22:53.905 ANA group ID: 1 00:22:53.905 Namespace Write Protected: No 00:22:53.905 Number of LBA Formats: 1 00:22:53.905 Current LBA Format: LBA Format #00 00:22:53.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:53.905 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.905 rmmod nvme_tcp 00:22:53.905 rmmod nvme_fabrics 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.905 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.906 00:26:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.814 
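[editor's note] The identify pass above covered both records from the discovery log: the discovery subsystem itself (nqn.2014-08.org.nvmexpress.discovery, admin-queue-only, abort/AER limits of 1) and the kernel data subsystem (nqn.2016-06.io.spdk:testnqn, 128 I/O queues, ANA-enabled). A minimal sketch of the same pass with stock nvme-cli instead of spdk_nvme_identify, reusing the address and NQNs printed above; the /dev node name is an assumption:

# hedged sketch: reproduce the identify pass with nvme-cli
modprobe nvme-tcp                                  # initiator transport (unloaded again just below)

# walk the discovery log page (two records, same as printed above)
nvme discover -t tcp -a 10.0.0.1 -s 4420

# connect to the data subsystem and dump its identify controller page
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme0                            # assumed device name; check with 'nvme list'
nvme disconnect -n nqn.2016-06.io.spdk:testnqn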
00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:55.814 00:26:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:57.186 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:57.186 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:57.186 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:57.186 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:57.186 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:57.186 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:57.186 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:57.186 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:57.186 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:57.186 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:57.186 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:57.186 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:57.186 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:57.186 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:57.186 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:57.186 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:58.123 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:58.123 00:22:58.123 real 0m9.418s 00:22:58.123 user 0m2.021s 00:22:58.123 sys 0m3.367s 00:22:58.123 00:26:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:58.123 00:26:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.123 ************************************ 00:22:58.123 END TEST nvmf_identify_kernel_target 00:22:58.123 ************************************ 00:22:58.123 00:26:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:58.123 00:26:49 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:58.123 00:26:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:58.123 00:26:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.123 00:26:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.123 ************************************ 00:22:58.123 START TEST nvmf_auth_host 00:22:58.123 ************************************ 00:22:58.123 00:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:58.382 * Looking for test storage... 00:22:58.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.382 00:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.281 
00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.281 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:00.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:00.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:00.282 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:00.282 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:23:00.282 00:23:00.282 --- 10.0.0.2 ping statistics --- 00:23:00.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.282 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:23:00.282 00:23:00.282 --- 10.0.0.1 ping statistics --- 00:23:00.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.282 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2370154 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2370154 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2370154 ']' 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
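[editor's note] Before the target app comes up, nvmf_tcp_init above carves the two-port e810 NIC into a back-to-back test rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings prove the path in both directions. A condensed sketch of the same commands as they appear in the trace:

# hedged sketch of the nvmf_tcp_init topology above (interface names as detected)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # initiator -> target (0.141 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator (0.102 ms above)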
00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.282 00:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d2303ee91034f62aa209b47dccf1cc5 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Hxe 00:23:01.656 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d2303ee91034f62aa209b47dccf1cc5 0 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d2303ee91034f62aa209b47dccf1cc5 0 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d2303ee91034f62aa209b47dccf1cc5 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Hxe 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Hxe 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Hxe 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:01.657 
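[editor's note] The first key above (/tmp/spdk.key-null.Hxe) shows the whole gen_dhchap_key pattern that repeats below for the remaining key/ckey pairs: draw len/2 random bytes with xxd, wrap them into a DHHC-1 secret with an inline python snippet, then chmod 0600 the file. A minimal standalone sketch, assuming (per the NVMe DH-HMAC-CHAP secret representation) that the container is base64(key || CRC32(key)) with the digest id from the digests map above (0 null, 1 sha256, 2 sha384, 3 sha512) in the middle field:

# hedged sketch of gen_dhchap_key; the DHHC-1 layout here is an assumption
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex chars of entropy
    file=$(mktemp -t spdk.key-demo.XXX)
    python3 -c 'import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")  # CRC32 of the key, little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' \
        "$key" "$digest_id" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
gen_dhchap_key_sketch 0 32    # null digest, 32 hex chars: same shape as keys[0] above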
00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=879a49f099a3f81d84866dee7d94fb94ce5862e2d74deb251fd28ed60f029963 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SR5 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 879a49f099a3f81d84866dee7d94fb94ce5862e2d74deb251fd28ed60f029963 3 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 879a49f099a3f81d84866dee7d94fb94ce5862e2d74deb251fd28ed60f029963 3 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=879a49f099a3f81d84866dee7d94fb94ce5862e2d74deb251fd28ed60f029963 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:01.657 00:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SR5 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SR5 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.SR5 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=45f0f3fb8f3220fc6cd998119b4f5018047a9f4a2a836e7c 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sP6 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 45f0f3fb8f3220fc6cd998119b4f5018047a9f4a2a836e7c 0 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 45f0f3fb8f3220fc6cd998119b4f5018047a9f4a2a836e7c 0 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=45f0f3fb8f3220fc6cd998119b4f5018047a9f4a2a836e7c 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sP6 00:23:01.657 00:26:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sP6 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.sP6 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=622112091b23a5ca6a604e0d7f39877103eee4ba904c2a6c 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GiC 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 622112091b23a5ca6a604e0d7f39877103eee4ba904c2a6c 2 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 622112091b23a5ca6a604e0d7f39877103eee4ba904c2a6c 2 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=622112091b23a5ca6a604e0d7f39877103eee4ba904c2a6c 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GiC 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GiC 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.GiC 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ad04145db6b88a92a9231632de97635 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hyp 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ad04145db6b88a92a9231632de97635 1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ad04145db6b88a92a9231632de97635 1 
00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ad04145db6b88a92a9231632de97635 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hyp 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hyp 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hyp 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e452345f0f9031bc042fdd2e03fbf78e 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HEB 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e452345f0f9031bc042fdd2e03fbf78e 1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e452345f0f9031bc042fdd2e03fbf78e 1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e452345f0f9031bc042fdd2e03fbf78e 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HEB 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HEB 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HEB 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=9d6c07391363418a7b23abc38d6fdfafbb1c549cfab88c17 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2BU 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d6c07391363418a7b23abc38d6fdfafbb1c549cfab88c17 2 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d6c07391363418a7b23abc38d6fdfafbb1c549cfab88c17 2 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d6c07391363418a7b23abc38d6fdfafbb1c549cfab88c17 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:01.657 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2BU 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2BU 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2BU 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=97aa523ad1b9b30ac0b0b68060ae5dd0 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.KPE 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 97aa523ad1b9b30ac0b0b68060ae5dd0 0 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 97aa523ad1b9b30ac0b0b68060ae5dd0 0 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=97aa523ad1b9b30ac0b0b68060ae5dd0 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.KPE 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.KPE 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.KPE 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=99215bda978c7fd923f9831d210ba2039b828f95dc8b700afd93c392fff6eccd 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.IoP 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 99215bda978c7fd923f9831d210ba2039b828f95dc8b700afd93c392fff6eccd 3 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 99215bda978c7fd923f9831d210ba2039b828f95dc8b700afd93c392fff6eccd 3 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=99215bda978c7fd923f9831d210ba2039b828f95dc8b700afd93c392fff6eccd 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.IoP 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.IoP 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.IoP 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2370154 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2370154 ']' 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
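[editor's note] With all five key files (and their four controller-key counterparts) on disk, the rpc_cmd calls that follow hand them to the running nvmf_tgt one keyring_file_add_key at a time, over the /var/tmp/spdk.sock socket that waitforlisten just confirmed. A condensed sketch of that registration loop (rpc_cmd wraps scripts/rpc.py; the loop shape mirrors the "for i in ${!keys[@]}" trace below):

# hedged sketch of the keyring registration loop that follows
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then                       # ckeys[4] is empty above, so it is skipped
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done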
00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.914 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Hxe 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.SR5 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SR5 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.sP6 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.GiC ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GiC 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hyp 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HEB ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HEB 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
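
The rpc_cmd frames around this point are the autotest wrapper over SPDK's scripts/rpc.py; issued by hand against the app listening on the socket above, the first registrations would look like this (file names taken from the trace; the -s socket path is the default echoed by waitforlisten):

./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.Hxe
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SR5
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.sP6
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GiC
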
00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2BU 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.KPE ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.KPE 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.IoP 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
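
The bare mkdir/echo/ln frames that follow are configure_kernel_target building the kernel nvmet configfs tree; spelled out with the standard nvmet attribute names (the mapping of each echo to its destination attribute is reconstructed by position and is not visible in the trace itself):

modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"            # auth.sh@37 later writes 0 here
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
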
00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:02.173 00:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:03.548 Waiting for block devices as requested 00:23:03.548 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:03.548 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:03.548 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:03.809 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:03.809 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:03.809 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:04.069 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:04.069 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:04.069 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:04.069 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:04.328 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:04.328 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:04.328 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:04.328 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:04.587 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:04.587 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:04.587 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:05.154 No valid GPT data, bailing 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:05.154 00:26:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:05.155 00:23:05.155 Discovery Log Number of Records 2, Generation counter 2 00:23:05.155 =====Discovery Log Entry 0====== 00:23:05.155 trtype: tcp 00:23:05.155 adrfam: ipv4 00:23:05.155 subtype: current discovery subsystem 00:23:05.155 treq: not specified, sq flow control disable supported 00:23:05.155 portid: 1 00:23:05.155 trsvcid: 4420 00:23:05.155 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:05.155 traddr: 10.0.0.1 00:23:05.155 eflags: none 00:23:05.155 sectype: none 00:23:05.155 =====Discovery Log Entry 1====== 00:23:05.155 trtype: tcp 00:23:05.155 adrfam: ipv4 00:23:05.155 subtype: nvme subsystem 00:23:05.155 treq: not specified, sq flow control disable supported 00:23:05.155 portid: 1 00:23:05.155 trsvcid: 4420 00:23:05.155 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:05.155 traddr: 10.0.0.1 00:23:05.155 eflags: none 00:23:05.155 sectype: none 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 
]] 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.155 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.413 nvme0n1 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.413 00:26:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.413 
00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.413 00:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.671 nvme0n1 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.671 00:26:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.671 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.930 nvme0n1 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
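
The host/auth.sh@100-@104 frames running here and in the rounds below iterate a full digest x dhgroup x keyid matrix; reconstructed from those frame markers (list contents taken from the printf frames earlier in the trace), the driver loop is shaped like:

for digest in sha256 sha384 sha512; do
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do                        # keys 0..4 registered above
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # kernel target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # SPDK initiator side
    done
  done
done
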
00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.930 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.931 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.931 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.931 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.190 nvme0n1 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:06.190 00:26:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.190 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.449 nvme0n1 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.449 00:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.449 nvme0n1 00:23:06.449 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.449 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.449 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.449 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.450 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.450 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 nvme0n1 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.967 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.225 nvme0n1 00:23:07.225 
00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.225 nvme0n1 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.225 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
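
Each nvmet_auth_set_key round, such as the echo 'hmac(sha256)' / echo ffdhe3072 pair just traced for keyid 2, lands in the kernel host entry created at auth.sh@36-@38; assuming the standard nvmet host attributes, the writes amount to (secrets abbreviated here, not in the trace):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe3072 > "$host/dhchap_dhgroup"
echo 'DHHC-1:01:MmFkMDQx...' > "$host/dhchap_key"        # host secret (abbreviated)
echo 'DHHC-1:01:ZTQ1MjM0...' > "$host/dhchap_ctrl_key"   # set only when a ckey exists
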
00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.483 00:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.483 nvme0n1 00:23:07.483 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.483 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.483 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.483 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.483 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.740 
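Each nvmet_auth_set_key call (the host/auth.sh@42-51 frames) selects the key pair for the current keyid and pushes the digest, DH group, and secrets to the kernel target. The trace only shows bare echo commands at @48-51, not their destination; a plausible reconstruction writes them into the nvmet configfs host entry, so the paths below are an assumption:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # assumed destination: per-host DH-HMAC-CHAP attributes under nvmet configfs
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"                # @48: 'hmac(sha256)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"                  # @49
        echo "$key" > "$host/dhchap_key"                          # @50: host secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # @51: bidirectional auth only
    }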
00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.740 00:26:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.740 nvme0n1 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.740 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.741 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:07.998 00:26:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.998 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.257 nvme0n1 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.257 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.258 00:26:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.258 00:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.552 nvme0n1 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.553 00:27:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.553 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.811 nvme0n1 00:23:08.811 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.811 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.811 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.811 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.811 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.811 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
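The initiator half of every iteration is connect_authenticate, traced at host/auth.sh@55-65: pin bdev_nvme to a single digest and DH group, attach over TCP with the named key (adding a controller key only when the ckeys entry for this keyid is non-empty), check that the controller shows up as nvme0, then detach. A minimal sketch under those assumptions; the NQNs, port, and flag names are copied from the trace, while key${keyid}/ckey${keyid} are assumed to be keyring names registered earlier in the run:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # expands to nothing when ckeys[keyid] is empty -> unidirectional auth (@58)
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                                            # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]               # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                                              # @65
    }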
00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.069 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.327 nvme0n1 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.327 00:27:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:09.327 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.328 00:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.586 nvme0n1 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:09.586 00:27:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.586 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.150 nvme0n1 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.150 
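The nvmf/common.sh@741-755 frames that precede every attach are get_main_ns_ip deciding which address the initiator dials: rdma runs read NVMF_FIRST_TARGET_IP, tcp runs read NVMF_INITIATOR_IP (10.0.0.1 throughout this log). Reconstructed roughly as follows; the early returns are assumptions, since the trace only ever shows the checks passing:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()                   # @741-742
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # @745
        # @747: the transport and its candidate variable name must both be set
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}        # @748: holds a variable *name*
        [[ -z ${!ip} ]] && return 1                 # @750: indirect expansion of that name
        echo "${!ip}"                               # @755: 10.0.0.1 in this run
    }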
00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.150 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.408 00:27:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.408 00:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.974 nvme0n1 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.974 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.539 nvme0n1 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.539 00:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.539 
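Every secret in the trace uses the DH-HMAC-CHAP key representation, DHHC-1:<t>:<base64>:, where (on our reading of the NVMe-oF spec; the log itself never decodes them) <t> says how the secret was transformed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. An illustrative way to pull one of the logged keys apart in the shell:

    key='DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2:'
    IFS=: read -r tag xform b64 _ <<< "$key"
    echo "$tag $xform"                  # DHHC-1 01 -> SHA-256-transformed secret
    echo -n "$b64" | base64 -d | wc -c  # 36 bytes = 32-byte secret + CRC-32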
00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:11.539 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.540 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.110 nvme0n1 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.110 00:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.675 nvme0n1 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.675 00:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.048 nvme0n1 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.048 00:27:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.048 00:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 nvme0n1 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:14.982 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.983 00:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.915 nvme0n1 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.915 
00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
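The nvmet_auth_set_key helper traced just above provisions the target side of each DH-HMAC-CHAP combination before the host attempts to connect. The xtrace shows the digest being wrapped as 'hmac(sha256)', then the dhgroup and the DHHC-1 key/controller-key strings being echoed, but not where they are written, since bash xtrace does not display redirections. A minimal reconstruction follows; the configfs paths are an assumption based on the kernel nvmet authentication interface and are not confirmed by this log:

nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey
    digest=$1 dhgroup=$2 keyid=$3
    key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed destination; the trace only shows the echo arguments.
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host_dir/dhchap_hash"
    echo "$dhgroup" > "$host_dir/dhchap_dhgroup"
    echo "$key" > "$host_dir/dhchap_key"
    # keyid 4 has no controller key in this run, hence the [[ -z '' ]]
    # check visible at host/auth.sh@51 in the trace.
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}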
00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.915 00:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.850 nvme0n1 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:16.850 
00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.850 00:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.222 nvme0n1 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.222 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.223 nvme0n1 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
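By this point the shape of connect_authenticate (host/auth.sh@104, trace lines @55 through @61) is fully visible: restrict the SPDK host to a single digest/dhgroup pair, resolve the initiator address, and attach with the key matching the keyid, adding --dhchap-ctrlr-key only when a controller key exists. A reconstruction from the traced commands; rpc_cmd is the test suite's wrapper around scripts/rpc.py, and the names key0..key4 and ckey0..ckey3 refer to keys the script registered before this section:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Expands to nothing when ckeys[keyid] is empty (keyid 4 in this run).
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
}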
00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.223 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.482 nvme0n1 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:18.482 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.483 00:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.741 nvme0n1 00:23:18.741 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.741 00:27:10 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.741 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.741 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.741 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.742 nvme0n1 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.742 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.999 nvme0n1 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.999 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
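get_main_ns_ip, whose trace straddles this point, is a small nvmf/common.sh helper: it maps the active transport to the name of the environment variable holding the initiator-facing address, then expands that variable indirectly, which is why the trace shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. A sketch matching the traced control flow; the failure paths are assumptions, since they are never exercised in this log:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Both emptiness checks appear at nvmf/common.sh@747 in the trace,
    # i.e. one compound conditional in the source.
    if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
        return 1  # assumed failure path
    fi

    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1  # assumed failure path
    echo "${!ip}"                # 10.0.0.1 in this run
}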
00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.000 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.257 nvme0n1 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
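Every successful handshake in this section ends with the same verification and teardown pattern: bdev_nvme_attach_controller prints the bdev it created (the bare nvme0n1 lines scattered through the log), host/auth.sh@64 then lists controllers and checks that exactly nvme0 came back, and @65 detaches it so the next digest/dhgroup/keyid combination starts from a clean state. A sketch of that step as it reads from the trace; the exact quoting in the source may differ:

# @64: the controller list must reduce to the single name nvme0; the
# autotest suite runs with errexit, so a mismatch fails the test.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
# @65: drop the controller before the next iteration of the loop.
rpc_cmd bdev_nvme_detach_controller nvme0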
00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.257 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.516 00:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.516 nvme0n1 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.516 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.776 nvme0n1 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.776 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.035 nvme0n1 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:20.035 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.292 nvme0n1 00:23:20.292 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.293 00:27:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.293 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.549 00:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.806 nvme0n1 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.806 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.064 nvme0n1 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.064 00:27:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.064 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.065 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.321 nvme0n1 00:23:21.321 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.321 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.321 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.321 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.321 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.321 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:21.582 00:27:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.582 00:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.857 nvme0n1 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:21.857 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.123 nvme0n1 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.123 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.124 00:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.691 nvme0n1 00:23:22.691 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.691 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.691 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.691 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.691 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.691 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.951 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.952 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.952 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.952 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.952 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.952 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.952 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.522 nvme0n1 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.522 00:27:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:23.522 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.523 00:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.092 nvme0n1 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.092 00:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.659 nvme0n1 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.659 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.660 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.660 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.660 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.229 nvme0n1 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.229 00:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.164 nvme0n1 00:23:26.164 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.164 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.164 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.164 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.164 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.164 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.424 00:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.363 nvme0n1 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.363 00:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.297 nvme0n1 00:23:28.297 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.297 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.297 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.297 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.297 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.298 00:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.679 nvme0n1 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.679 00:27:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.679 00:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.242 nvme0n1 00:23:30.242 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.242 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.242 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.242 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.242 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.242 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.498 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.499 00:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.499 nvme0n1 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.499 00:27:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.499 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.758 nvme0n1 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.758 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.016 nvme0n1 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.016 00:27:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.016 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.017 00:27:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.017 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.276 nvme0n1 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.276 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.536 nvme0n1 00:23:31.536 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.536 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.536 00:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.536 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.536 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.536 00:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.536 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.796 nvme0n1 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.796 
00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.796 00:27:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.796 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.797 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.797 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.797 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.797 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.797 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.797 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.057 nvme0n1 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
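
Each pass of the keyid loop in this trace follows the same pattern: install the DHHC-1 secret(s) for the host on the target side, pin the initiator to a single digest/DH-group pair with bdev_nvme_set_options, attach a controller with the matching --dhchap-key/--dhchap-ctrlr-key names, confirm the controller came up as nvme0, and detach it. A minimal sketch of one iteration, reconstructed from the trace; the configfs paths written by nvmet_auth_set_key are an assumption (the trace does not show them), while every rpc_cmd invocation is taken verbatim from the commands above (key0/ckey0 are key names registered earlier in the run):

  # target side: install the host's DHHC-1 secrets (configfs paths assumed, not shown in this trace)
  echo -n "DHHC-1:00:..." > /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0/dhchap_key
  echo -n "DHHC-1:03:..." > /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0/dhchap_ctrlr_key

  # host side: restrict negotiation to one digest and one DH group, then connect and verify
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
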
00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.057 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.316 nvme0n1 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.316 00:27:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.316 00:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.573 nvme0n1 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.573 
00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.573 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.831 nvme0n1 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.831 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.832 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.089 nvme0n1 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.089 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.347 00:27:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.347 00:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.604 nvme0n1 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
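
The DHHC-1:xx:...: strings echoed throughout are NVMe-oF DH-HMAC-CHAP secret representations: a fixed "DHHC-1" tag, a two-digit transform indicator (to the best of my recollection, 00 = untransformed secret and 01/02/03 = secret transformed with SHA-256/384/512 per the spec), and a colon-terminated base64 field carrying the key material plus CRC. A small splitter, purely illustrative and not part of auth.sh, using one of the keyid=2 secrets from the trace:

  # split a DHHC-1 secret into its fields (illustrative helper only)
  secret='DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2:'
  IFS=: read -r tag xform payload _ <<< "$secret"
  echo "tag=$tag transform=$xform payload=$payload"
  # prints: tag=DHHC-1 transform=01 payload=MmFkMDQx...
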
00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.604 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.860 nvme0n1 00:23:33.860 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.860 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:33.860 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.860 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.860 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.860 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.861 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.429 nvme0n1 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.429 00:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.689 nvme0n1 00:23:34.689 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.689 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.689 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.689 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.689 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.689 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.689 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.689 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
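
Note that the keyid=4 passes above attach with --dhchap-key key4 only: ckeys[4] is empty, so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 silently drops the --dhchap-ctrlr-key argument pair, and that iteration exercises unidirectional (host-only) authentication. The mechanism is plain bash parameter expansion, as in this self-contained sketch:

  # how auth.sh omits the controller key when none is configured
  declare -a ckeys=([0]=secret0 [4]='')
  for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args"   # 2 extra args for keyid=0, 0 for keyid=4
  done
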
00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.690 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.259 nvme0n1 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
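
The get_main_ns_ip trace repeated before every attach resolves which address the initiator should dial: ip_candidates maps each transport to the name of an environment variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), and the 10.0.0.1 finally echoed comes from expanding that variable indirectly. A condensed sketch of the same logic, assuming NVMF_INITIATOR_IP is exported by the surrounding test environment as it evidently is in this run:

  # condensed get_main_ns_ip logic: transport name -> env var name -> address
  NVMF_INITIATOR_IP=10.0.0.1          # provided by the test environment in this run
  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  varname=${ip_candidates[tcp]}
  echo "${!varname}"                  # indirect expansion -> 10.0.0.1
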
00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.259 00:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.858 nvme0n1 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.858 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.859 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.859 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.859 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.859 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.859 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.859 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.859 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.432 nvme0n1 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.432 00:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.000 nvme0n1 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.000 00:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.564 nvme0n1 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.564 00:27:29 
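# A note on the key material flowing through these steps: DH-HMAC-CHAP secrets
# use the textual DHHC-1 representation, DHHC-1:<t>:<base64-secret>:, where the
# <t> field records the hash transformation applied to the secret (00 = none,
# 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), so the DHHC-1:03: key used for
# keyid 4 above is a SHA-512-transformed secret. keyid 4 also carries no
# controller key (ckey is empty), making it the unidirectional-auth case: the
# expansion at host/auth.sh@58,
#
#     ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
#
# collapses to an empty array when ckeys[keyid] is unset, so
# bdev_nvme_attach_controller runs with --dhchap-key key4 and no
# --dhchap-ctrlr-key, exactly as in the command above.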
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.564 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQyMzAzZWU5MTAzNGY2MmFhMjA5YjQ3ZGNjZjFjYzX4NktH: 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: ]] 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5YTQ5ZjA5OWEzZjgxZDg0ODY2ZGVlN2Q5NGZiOTRjZTU4NjJlMmQ3NGRlYjI1MWZkMjhlZDYwZjAyOTk2M0VoevY=: 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.565 00:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.944 nvme0n1 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.944 00:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.512 nvme0n1 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.512 00:27:31 
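# For reference, each successful iteration here reduces to two host-side RPCs.
# The sketch below replays them through scripts/rpc.py rather than the rpc_cmd
# wrapper the trace uses; key1/ckey1 are the names of DH-CHAP keys registered
# earlier in the run (the registration step is outside this excerpt):
#
#     ./scripts/rpc.py bdev_nvme_set_options \
#         --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
#     ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
#         -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
#         -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
#
# bdev_nvme_set_options pins the initiator to exactly one digest/DH-group pair,
# so a successful attach demonstrates that this specific pair negotiates
# cleanly against what nvmet_auth_set_key programmed into the kernel target.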
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.512 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmFkMDQxNDVkYjZiODhhOTJhOTIzMTYzMmRlOTc2MzXJYkR2: 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: ]] 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQ1MjM0NWYwZjkwMzFiYzA0MmZkZDJlMDNmYmY3OGVmfnVx: 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.772 00:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.712 nvme0n1 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWQ2YzA3MzkxMzYzNDE4YTdiMjNhYmMzOGQ2ZmRmYWZiYjFjNTQ5Y2ZhYjg4YzE349oBTA==: 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdhYTUyM2FkMWI5YjMwYWMwYjBiNjgwNjBhZTVkZDD10hRh: 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:40.712 00:27:32 
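# get_main_ns_ip, whose expansion repeats before every attach, only resolves
# which address the initiator should dial. A condensed sketch of the logic
# visible at nvmf/common.sh@741-755 (the transport has already been expanded
# to the literal "tcp" by the time it appears in the trace):
#
#     local ip
#     local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
#     ip=${ip_candidates[tcp]}    # name of the variable to read: NVMF_INITIATOR_IP
#     echo "${!ip}"               # indirect expansion -> 10.0.0.1
#
# The indirection keeps one helper serving both transports: an RDMA run would
# resolve NVMF_FIRST_TARGET_IP instead.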
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.712 00:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.652 nvme0n1 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTkyMTViZGE5NzhjN2ZkOTIzZjk4MzFkMjEwYmEyMDM5YjgyOGY5NWRjOGI3MDBhZmQ5M2MzOTJmZmY2ZWNjZPRZud8=: 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:41.652 00:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.591 nvme0n1 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDVmMGYzZmI4ZjMyMjBmYzZjZDk5ODExOWI0ZjUwMTgwNDdhOWY0YTJhODM2ZTdjEZlrvg==: 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjIyMTEyMDkxYjIzYTVjYTZhNjA0ZTBkN2YzOTg3NzEwM2VlZTRiYTkwNGMyYTZjOSBuAg==: 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.591 
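# From host/auth.sh@110 onward the script leaves the happy path: the kernel
# target is re-keyed for sha256/ffdhe2048 with keyid 1, and the attach attempts
# that follow are deliberately broken -- first no DH-CHAP key at all, then the
# wrong key (key2), then the right key paired with a mismatched controller key
# (key1 + ckey2). Each attempt is wrapped in the NOT helper, which inverts the
# exit status so the test passes only if the RPC fails; schematically (an
# illustrative reduction, not the autotest helper verbatim):
#
#     NOT() { if "$@"; then return 1; else return 0; fi; }
#     NOT rpc_cmd bdev_nvme_attach_controller ...   # "success" here means the RPC failed
#
# The es=0/es=1 bookkeeping visible in the trace is how the real helper records
# and checks the wrapped command's exit status.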
00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.591 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.849 request: 00:23:42.849 { 00:23:42.849 "name": "nvme0", 00:23:42.849 "trtype": "tcp", 00:23:42.849 "traddr": "10.0.0.1", 00:23:42.849 "adrfam": "ipv4", 00:23:42.849 "trsvcid": "4420", 00:23:42.849 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:42.849 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:42.849 "prchk_reftag": false, 00:23:42.849 "prchk_guard": false, 00:23:42.849 "hdgst": false, 00:23:42.849 "ddgst": false, 00:23:42.849 "method": "bdev_nvme_attach_controller", 00:23:42.849 "req_id": 1 00:23:42.849 } 00:23:42.849 Got JSON-RPC error response 00:23:42.849 response: 00:23:42.849 { 00:23:42.849 "code": -5, 00:23:42.849 "message": "Input/output error" 00:23:42.849 } 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.849 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.850 request: 00:23:42.850 { 00:23:42.850 "name": "nvme0", 00:23:42.850 "trtype": "tcp", 00:23:42.850 "traddr": "10.0.0.1", 00:23:42.850 "adrfam": "ipv4", 00:23:42.850 "trsvcid": "4420", 00:23:42.850 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:42.850 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:42.850 "prchk_reftag": false, 00:23:42.850 "prchk_guard": false, 00:23:42.850 "hdgst": false, 00:23:42.850 "ddgst": false, 00:23:42.850 "dhchap_key": "key2", 00:23:42.850 "method": "bdev_nvme_attach_controller", 00:23:42.850 "req_id": 1 00:23:42.850 } 00:23:42.850 Got JSON-RPC error response 00:23:42.850 response: 00:23:42.850 { 00:23:42.850 "code": -5, 00:23:42.850 "message": "Input/output error" 00:23:42.850 } 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:42.850 00:27:34 
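# All of the rejected attaches fail the same way: the connect attempt is
# aborted when DH-HMAC-CHAP negotiation cannot complete, no controller is
# created, and the RPC surfaces this as JSON-RPC error -5, "Input/output
# error", as in the request/response dumps above. The jq length checks
# interleaved between the attempts assert that nothing leaked:
#
#     rpc_cmd bdev_nvme_get_controllers | jq length   # expected 0 after a rejected attach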
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.850 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.109 request: 00:23:43.109 { 00:23:43.109 "name": "nvme0", 00:23:43.109 "trtype": "tcp", 00:23:43.109 "traddr": "10.0.0.1", 00:23:43.109 "adrfam": "ipv4", 
00:23:43.109 "trsvcid": "4420", 00:23:43.109 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:43.109 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:43.109 "prchk_reftag": false, 00:23:43.109 "prchk_guard": false, 00:23:43.109 "hdgst": false, 00:23:43.109 "ddgst": false, 00:23:43.109 "dhchap_key": "key1", 00:23:43.109 "dhchap_ctrlr_key": "ckey2", 00:23:43.109 "method": "bdev_nvme_attach_controller", 00:23:43.109 "req_id": 1 00:23:43.109 } 00:23:43.109 Got JSON-RPC error response 00:23:43.109 response: 00:23:43.109 { 00:23:43.109 "code": -5, 00:23:43.109 "message": "Input/output error" 00:23:43.109 } 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.109 rmmod nvme_tcp 00:23:43.109 rmmod nvme_fabrics 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2370154 ']' 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2370154 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2370154 ']' 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2370154 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2370154 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2370154' 00:23:43.109 killing process with pid 2370154 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2370154 00:23:43.109 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2370154 00:23:43.370 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:23:43.370 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.370 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.370 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.370 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.370 00:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.370 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.370 00:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:45.275 00:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:46.654 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:46.654 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:46.654 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:46.654 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:46.654 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:46.654 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:46.654 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:46.654 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:46.654 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:46.654 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:46.654 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:46.654 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:46.654 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:46.654 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:46.654 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:46.913 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:47.851 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:47.851 00:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Hxe /tmp/spdk.key-null.sP6 /tmp/spdk.key-sha256.hyp /tmp/spdk.key-sha384.2BU /tmp/spdk.key-sha512.IoP 
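# Kernel-target teardown above runs strictly in reverse creation order, since
# configfs refuses to remove a directory that is still referenced. Condensed
# from the clean_kernel_target expansion (xtrace does not capture redirections,
# so the destination of the bare "echo 0" is not visible in the trace; the
# namespace enable attribute shown below is the assumed target, as disabling
# the namespace before removal is the conventional step):
#
#     cfs=/sys/kernel/config/nvmet
#     rm    $cfs/subsystems/$subnqn/allowed_hosts/$hostnqn   # drop the host ACL entry
#     rmdir $cfs/hosts/$hostnqn
#     echo 0 > $cfs/subsystems/$subnqn/namespaces/1/enable   # assumed target of "echo 0"
#     rm -f $cfs/ports/1/subsystems/$subnqn                  # unlink port -> subsystem
#     rmdir $cfs/subsystems/$subnqn/namespaces/1
#     rmdir $cfs/ports/1
#     rmdir $cfs/subsystems/$subnqn
#     modprobe -r nvmet_tcp nvmet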
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:47.851 00:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:48.788 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:48.788 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:48.788 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:48.788 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:48.788 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:49.046 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:49.046 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:49.046 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:49.046 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:49.046 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:49.046 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:49.046 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:49.046 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:49.046 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:49.046 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:49.046 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:49.046 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:49.046 00:23:49.046 real 0m50.867s 00:23:49.046 user 0m48.832s 00:23:49.046 sys 0m5.880s 00:23:49.046 00:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.046 00:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.046 ************************************ 00:23:49.046 END TEST nvmf_auth_host 00:23:49.046 ************************************ 00:23:49.046 00:27:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:49.046 00:27:40 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:49.046 00:27:40 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:49.046 00:27:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:49.046 00:27:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.046 00:27:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.047 ************************************ 00:23:49.047 START TEST nvmf_digest 00:23:49.047 ************************************ 00:23:49.047 00:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:49.305 * Looking for test storage... 
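# With the auth suite finished (50.9s wall clock per the timing summary above)
# the harness moves straight into nvmf_digest. That suite exercises the
# optional NVMe/TCP header and data digests -- CRC32C protection of PDU
# headers and payloads -- which the earlier request dumps already exposed as
# parameters of bdev_nvme_attach_controller, both disabled throughout the auth
# tests:
#
#     "hdgst": false, "ddgst": false    # from the attach_controller params above
#
# digest.sh drives its cases through the bperf socket and 2-second runtime
# declared just below (host/digest.sh@15-16).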
00:23:49.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.305 00:27:40 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.305 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:49.305 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.305 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.305 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.305 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.305 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.306 00:27:40 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:49.306 00:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:51.278 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:51.278 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:51.278 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:51.278 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:23:51.278 00:23:51.278 --- 10.0.0.2 ping statistics --- 00:23:51.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.278 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:23:51.278 00:23:51.278 --- 10.0.0.1 ping statistics --- 00:23:51.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.278 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:51.278 ************************************ 00:23:51.278 START TEST nvmf_digest_clean 00:23:51.278 ************************************ 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:51.278 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2380380 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2380380 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2380380 ']' 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.279 
00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.279 00:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:51.279 [2024-07-16 00:27:42.831886] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:23:51.279 [2024-07-16 00:27:42.831980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.279 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.536 [2024-07-16 00:27:42.903038] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.536 [2024-07-16 00:27:43.021747] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.536 [2024-07-16 00:27:43.021811] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.536 [2024-07-16 00:27:43.021828] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.536 [2024-07-16 00:27:43.021841] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.536 [2024-07-16 00:27:43.021862] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
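Annotator note: the startup notices above (DPDK EAL parameters, the 0xFFFF tracepoint mask, and the reactor-started line that follows) come from nvmfappstart launching nvmf_tgt inside the namespace with --wait-for-rpc, which holds subsystem initialization until the test has a chance to configure the accel layer over RPC; waitforlisten then blocks until the RPC socket answers. A sketch of roughly what that amounts to; the polling loop paraphrases the real waitforlisten helper in autotest_common.sh, which retries up to the max_retries=100 seen in the trace:

    # nvmfappstart, condensed: target runs inside the namespace, suspended until RPC
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # paraphrase of waitforlisten: poll until the RPC server responds, then proceed
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5   # assumption: the real helper's retry interval is not shown in the trace
    done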
00:23:51.536 [2024-07-16 00:27:43.021902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.471 null0 00:23:52.471 [2024-07-16 00:27:43.937212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.471 [2024-07-16 00:27:43.961429] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2380533 00:23:52.471 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:52.472 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2380533 /var/tmp/bperf.sock 00:23:52.472 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2380533 ']' 00:23:52.472 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:52.472 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.472 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:52.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:52.472 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.472 00:27:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.472 [2024-07-16 00:27:44.012151] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:23:52.472 [2024-07-16 00:27:44.012232] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380533 ] 00:23:52.472 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.472 [2024-07-16 00:27:44.080630] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.730 [2024-07-16 00:27:44.202197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.730 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.730 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:52.730 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:52.730 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:52.730 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:52.988 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:52.989 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:53.554 nvme0n1 00:23:53.554 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:53.554 00:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:53.554 Running I/O for 2 seconds... 
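Annotator note: each run_bperf pass drives bdevperf entirely over its private RPC socket, as traced above: framework_start_init completes startup, bdev_nvme_attach_controller with --ddgst attaches the target over TCP with the NVMe/TCP data digest (CRC32C) enabled, which is what generates the crc32c accel operations this test measures, and bdevperf.py perform_tests kicks off the 2-second workload whose results follow below. Condensed from the trace:

    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests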
00:23:55.451 00:23:55.451 Latency(us) 00:23:55.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.451 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:55.451 nvme0n1 : 2.00 19539.02 76.32 0.00 0.00 6543.21 2730.67 11650.84 00:23:55.451 =================================================================================================================== 00:23:55.451 Total : 19539.02 76.32 0.00 0.00 6543.21 2730.67 11650.84 00:23:55.451 0 00:23:55.708 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:55.708 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:55.708 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:55.708 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:55.708 | select(.opcode=="crc32c") 00:23:55.708 | "\(.module_name) \(.executed)"' 00:23:55.708 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2380533 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2380533 ']' 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2380533 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2380533 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2380533' 00:23:55.966 killing process with pid 2380533 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2380533 00:23:55.966 Received shutdown signal, test time was about 2.000000 seconds 00:23:55.966 00:23:55.966 Latency(us) 00:23:55.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.966 =================================================================================================================== 00:23:55.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.966 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2380533 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:56.223 00:27:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2380943 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2380943 /var/tmp/bperf.sock 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2380943 ']' 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:56.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:56.223 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.224 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.224 [2024-07-16 00:27:47.692231] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:23:56.224 [2024-07-16 00:27:47.692309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380943 ] 00:23:56.224 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:56.224 Zero copy mechanism will not be used. 
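Annotator note: after every pass, including the 128 KiB run starting here, digest.sh repeats the accel verification seen above: accel_get_stats is queried on the bperf socket and a jq filter extracts which module actually executed the crc32c operations. With DSA disabled (scan_dsa=false), exp_module is pinned to software, and the assertions require both a non-zero executed count and a matching module name. From the trace:

    rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected output here is "software <nonzero count>"; digest.sh asserts
    # (( acc_executed > 0 )) and [[ $acc_module == $exp_module ]]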
00:23:56.224 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.224 [2024-07-16 00:27:47.749569] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.481 [2024-07-16 00:27:47.858334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.481 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.481 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:56.481 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:56.481 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:56.481 00:27:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:56.738 00:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.738 00:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.994 nvme0n1 00:23:56.994 00:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:56.994 00:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:57.251 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:57.251 Zero copy mechanism will not be used. 00:23:57.251 Running I/O for 2 seconds... 
00:23:59.150 00:23:59.150 Latency(us) 00:23:59.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.150 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:59.150 nvme0n1 : 2.00 2554.05 319.26 0.00 0.00 6259.73 5606.97 14563.56 00:23:59.150 =================================================================================================================== 00:23:59.150 Total : 2554.05 319.26 0.00 0.00 6259.73 5606.97 14563.56 00:23:59.150 0 00:23:59.150 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:59.150 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:59.150 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:59.150 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:59.150 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:59.150 | select(.opcode=="crc32c") 00:23:59.150 | "\(.module_name) \(.executed)"' 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2380943 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2380943 ']' 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2380943 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2380943 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2380943' 00:23:59.408 killing process with pid 2380943 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2380943 00:23:59.408 Received shutdown signal, test time was about 2.000000 seconds 00:23:59.408 00:23:59.408 Latency(us) 00:23:59.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.408 =================================================================================================================== 00:23:59.408 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.408 00:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2380943 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:59.666 00:27:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2381343 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2381343 /var/tmp/bperf.sock 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2381343 ']' 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:59.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.666 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:59.923 [2024-07-16 00:27:51.280658] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
00:23:59.924 [2024-07-16 00:27:51.280738] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381343 ] 00:23:59.924 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.924 [2024-07-16 00:27:51.341371] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.924 [2024-07-16 00:27:51.456425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.924 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.924 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:59.924 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:59.924 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:59.924 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:00.489 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:00.489 00:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:00.746 nvme0n1 00:24:00.746 00:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:00.746 00:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:01.004 Running I/O for 2 seconds... 
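Annotator note: the run_bperf arguments map one-to-one onto the bdevperf invocation traced above, so "run_bperf randwrite 4096 128 false" means workload randwrite, 4096-byte I/O, queue depth 128, DSA scan off, always with a fixed 2-second runtime; across the four passes the suite covers {randread, randwrite} x {4 KiB at qd 128, 128 KiB at qd 16}. From the trace:

    # run_bperf randwrite 4096 128 false  ->  rw=randwrite, bs=4096, qd=128, scan_dsa=false
    bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -q 128 -t 2 \
        -z --wait-for-rpc    # -z keeps the run idle until perform_tests arrives on the bperf socket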
00:24:02.900 00:24:02.900 Latency(us) 00:24:02.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.900 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:02.900 nvme0n1 : 2.01 21181.68 82.74 0.00 0.00 6033.04 2985.53 10777.03 00:24:02.900 =================================================================================================================== 00:24:02.900 Total : 21181.68 82.74 0.00 0.00 6033.04 2985.53 10777.03 00:24:02.900 0 00:24:02.900 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:02.900 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:02.900 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:02.900 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:02.900 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:02.900 | select(.opcode=="crc32c") 00:24:02.900 | "\(.module_name) \(.executed)"' 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2381343 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2381343 ']' 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2381343 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2381343 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2381343' 00:24:03.158 killing process with pid 2381343 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2381343 00:24:03.158 Received shutdown signal, test time was about 2.000000 seconds 00:24:03.158 00:24:03.158 Latency(us) 00:24:03.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.158 =================================================================================================================== 00:24:03.158 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.158 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2381343 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:03.414 00:27:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2381753 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2381753 /var/tmp/bperf.sock 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2381753 ']' 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:03.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.414 00:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:03.672 [2024-07-16 00:27:55.037068] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:24:03.672 [2024-07-16 00:27:55.037144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381753 ] 00:24:03.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:03.672 Zero copy mechanism will not be used. 
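Annotator note: the repeated "I/O size of 131072 is greater than zero copy threshold (65536)" warning above is expected, not a failure: the posix sock layer only uses its zero-copy send path for payloads at or below the threshold, and 128 KiB is twice the 64 KiB default reported here, so these large-block runs fall back to copying sends. If a test did want zero copy at that size, the threshold could presumably be raised before framework_start_init; a hypothetical knob sketch, not something this suite does, and it assumes rpc.py on this SPDK version exposes sock_impl_set_options with a --zerocopy-threshold flag:

    # hypothetical: raise the posix zerocopy threshold to cover 128 KiB sends
    rpc.py -s /var/tmp/bperf.sock sock_impl_set_options -i posix --zerocopy-threshold 131072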
00:24:03.672 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.672 [2024-07-16 00:27:55.100983] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.672 [2024-07-16 00:27:55.217358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.672 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.672 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:03.672 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:03.672 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:03.672 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:04.237 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.237 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.494 nvme0n1 00:24:04.494 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:04.494 00:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:04.494 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.494 Zero copy mechanism will not be used. 00:24:04.494 Running I/O for 2 seconds... 
00:24:06.437
00:24:06.437 Latency(us)
00:24:06.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:06.437 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:06.437 nvme0n1 : 2.01 1802.00 225.25 0.00 0.00 8857.16 4320.52 11796.48
00:24:06.437 ===================================================================================================================
00:24:06.437 Total : 1802.00 225.25 0.00 0.00 8857.16 4320.52 11796.48
00:24:06.437 0
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:06.694 | select(.opcode=="crc32c")
00:24:06.694 | "\(.module_name) \(.executed)"'
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2381753
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2381753 ']'
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2381753
00:24:06.694 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:24:06.695 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:06.695 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2381753
00:24:06.952 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:06.952 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:06.952 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2381753'
00:24:06.952 killing process with pid 2381753
00:24:06.952 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2381753
00:24:06.952 Received shutdown signal, test time was about 2.000000 seconds
00:24:06.952
00:24:06.952 Latency(us)
00:24:06.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:06.952 ===================================================================================================================
00:24:06.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:06.952 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2381753
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2380380
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2380380 ']'
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2380380
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2380380
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2380380'
00:24:07.210 killing process with pid 2380380
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2380380
00:24:07.210 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2380380
00:24:07.469
00:24:07.469 real 0m16.145s
00:24:07.469 user 0m31.880s
00:24:07.469 sys 0m3.852s
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:24:07.469 ************************************
00:24:07.469 END TEST nvmf_digest_clean
00:24:07.469 ************************************
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:24:07.469 ************************************
00:24:07.469 START TEST nvmf_digest_error
00:24:07.469 ************************************
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2382312
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2382312
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2382312 ']'
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:07.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:07.469 00:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:07.469 [2024-07-16 00:27:59.023438] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:24:07.469 [2024-07-16 00:27:59.023534] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:07.469 EAL: No free 2048 kB hugepages reported on node 1
00:24:07.727 [2024-07-16 00:27:59.088496] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:07.727 [2024-07-16 00:27:59.202752] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:07.727 [2024-07-16 00:27:59.202806] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:07.727 [2024-07-16 00:27:59.202833] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:07.727 [2024-07-16 00:27:59.202845] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:07.727 [2024-07-16 00:27:59.202854] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
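The --wait-for-rpc flag in the nvmf_tgt invocation above is what makes this test possible: the target halts before framework initialization so the accel layer can be reconfigured first, and only then finishes starting. Outside the autotest harness, that boot sequence reduces to the sketch below (a minimal sketch, assuming nvmf_tgt and rpc.py from an SPDK build tree and the default /var/tmp/spdk.sock RPC socket; framework_start_init is the stock RPC that resumes a --wait-for-rpc start):

    # Start the target paused, before subsystem init.
    ./build/bin/nvmf_tgt --wait-for-rpc &

    # Route every crc32c operation through the error-injection accel module,
    # as digest.sh does below with rpc_cmd accel_assign_opc.
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error

    # Resume initialization; transport and subsystem RPCs follow as usual.
    ./scripts/rpc.py framework_start_init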
00:24:07.727 [2024-07-16 00:27:59.202901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:07.727 [2024-07-16 00:27:59.271498] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.727 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:07.986 null0
00:24:07.986 [2024-07-16 00:27:59.391576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:07.986 [2024-07-16 00:27:59.415811] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2382336
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2382336 /var/tmp/bperf.sock
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2382336 ']'
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:07.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:07.986 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:07.986 [2024-07-16 00:27:59.466892] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:24:07.986 [2024-07-16 00:27:59.466967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382336 ]
00:24:07.986 EAL: No free 2048 kB hugepages reported on node 1
00:24:08.244 [2024-07-16 00:27:59.524660] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:08.244 [2024-07-16 00:27:59.634804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:08.244 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:08.244 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:08.244 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:08.244 00:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:08.501 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:08.501 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.501 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:08.501 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.501 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:08.501 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:08.758 nvme0n1
00:24:09.016 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:09.016 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.016 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:09.016 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.016 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:09.016 00:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:09.016 Running I/O for 2 seconds... 00:24:09.016 [2024-07-16 00:28:00.514784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.514843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.514866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.016 [2024-07-16 00:28:00.531716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.531752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.531772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.016 [2024-07-16 00:28:00.543309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.543344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.543364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.016 [2024-07-16 00:28:00.559536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.559577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.559597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.016 [2024-07-16 00:28:00.573501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.573534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.573552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.016 [2024-07-16 00:28:00.585433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.585479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.585499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.016 [2024-07-16 00:28:00.599758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.599793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7568 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.599812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.016 [2024-07-16 00:28:00.613459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.613492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.613511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.016 [2024-07-16 00:28:00.626334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.016 [2024-07-16 00:28:00.626370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.016 [2024-07-16 00:28:00.626389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.273 [2024-07-16 00:28:00.642430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.273 [2024-07-16 00:28:00.642467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.642486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.655743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.655777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.655797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.668119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.668149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.668183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.682017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.682047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.682063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.696696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.696729] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.696758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.709970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.709999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.710031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.722335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.722368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.722387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.738046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.738076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.738093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.750286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.750319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.750337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.764868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.764909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.764928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.776639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.776675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.776694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.791571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.791606] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.791625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.804246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.804280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.804298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.817946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.817976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.817993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.832171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.832219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.832237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.845992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.846024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.846040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.859818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.859852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.859870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.274 [2024-07-16 00:28:00.871896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.274 [2024-07-16 00:28:00.871945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.274 [2024-07-16 00:28:00.871962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.887636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.887672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.887691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.899567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.899603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.899622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.913573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.913608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.913628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.929041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.929073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.929094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.940412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.940446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.940464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.954802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.954834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.954852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.969857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.969899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.969919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.983428] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.983462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.983480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:00.997350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:00.997384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:00.997403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:01.012359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:01.012404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:01.012422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:01.024074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:01.024104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:01.024135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:01.039161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.532 [2024-07-16 00:28:01.039199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.532 [2024-07-16 00:28:01.039216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.532 [2024-07-16 00:28:01.051868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.533 [2024-07-16 00:28:01.051929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.533 [2024-07-16 00:28:01.051946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.533 [2024-07-16 00:28:01.065991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.533 [2024-07-16 00:28:01.066022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.533 [2024-07-16 00:28:01.066039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:24:09.533 [2024-07-16 00:28:01.079080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.533 [2024-07-16 00:28:01.079109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.533 [2024-07-16 00:28:01.079126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.533 [2024-07-16 00:28:01.094241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.533 [2024-07-16 00:28:01.094274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.533 [2024-07-16 00:28:01.094292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.533 [2024-07-16 00:28:01.107565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.533 [2024-07-16 00:28:01.107598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.533 [2024-07-16 00:28:01.107616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.533 [2024-07-16 00:28:01.120814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.533 [2024-07-16 00:28:01.120847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.533 [2024-07-16 00:28:01.120865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.533 [2024-07-16 00:28:01.134570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.533 [2024-07-16 00:28:01.134603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.533 [2024-07-16 00:28:01.134621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.148855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.148899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.148935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.160526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.160560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.160579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.174563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.174597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.174616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.189293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.189326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.189345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.200961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.200991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.201007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.215668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.215701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.215719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.229104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.229134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.229151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.243371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.243405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.243424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.255627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.255660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.255678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.270529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.270563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.270581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.283959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.283989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.284011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.297414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.297449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.297467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.311335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.311368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.311386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.325598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.325631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.325649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.339310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.339343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.339361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.352967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.352997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:09.791 [2024-07-16 00:28:01.353013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.365241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.365273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.365291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.379106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.379136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.379153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.791 [2024-07-16 00:28:01.393505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:09.791 [2024-07-16 00:28:01.393538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.791 [2024-07-16 00:28:01.393557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.407440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.407476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.407495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.422327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.422362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.422381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.435187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.435221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.435240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.450802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.450835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
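Each repeating pair of entries in this stretch is one injected failure making the full round trip: nvme_tcp.c:1459 reports that the crc32c computed for the received data does not match the digest on the wire, and the command is then completed up the stack as COMMAND TRANSIENT TRANSPORT ERROR (status 00/22) with dnr:0, so the retry-forever policy set earlier via --bdev-retry-count -1 simply resubmits the read. A quick way to tally these from a captured run (a sketch; bperf.log is a hypothetical file holding the console output above):

    # Count the reads that tripped the injected crc32c digest error.
    grep -c 'data digest error on tqpair' bperf.log

    # Break the transient transport error completions down by command id.
    grep -Eo 'qid:1 cid:[0-9]+' bperf.log | sort | uniq -c | sort -rn | head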
lba:18987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.450854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.462481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.462514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.462533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.476107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.476138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.476179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.490656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.490690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.490708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.505014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.505045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.505061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.517327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.517360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.517385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.530704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.530736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.530755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.546000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.546030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.546047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.557950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.557979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.558012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.573943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.573972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.574004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.585417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.585450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.585467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.601971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.602001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.602017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.616788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.616821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.616840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.628690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.628721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.628740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.643807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 
00:24:10.048 [2024-07-16 00:28:01.643845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.643865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.048 [2024-07-16 00:28:01.656857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.048 [2024-07-16 00:28:01.656900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-16 00:28:01.656933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.672380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.672416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.672435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.684317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.684352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.684370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.699450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.699484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.699502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.714991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.715020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.715049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.728259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.728292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.728310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.741874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.741931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.741949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.753233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.753266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.753284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.768459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.768492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.768511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.781974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.782004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.782020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.795272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.795304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.795323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.809458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.809492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.809510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.821815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.821849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.821868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.836804] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.836838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.836857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.850761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.850794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.850812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.862940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.862984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.863001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.878291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.878325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.878350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.892027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.892059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.892075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.905297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.905330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.905349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.306 [2024-07-16 00:28:01.918860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.306 [2024-07-16 00:28:01.918906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.306 [2024-07-16 00:28:01.918941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:10.564 [2024-07-16 00:28:01.931695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.564 [2024-07-16 00:28:01.931731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.564 [2024-07-16 00:28:01.931750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.564 [2024-07-16 00:28:01.947301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.564 [2024-07-16 00:28:01.947336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.564 [2024-07-16 00:28:01.947354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.564 [2024-07-16 00:28:01.960775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.564 [2024-07-16 00:28:01.960809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.564 [2024-07-16 00:28:01.960827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.564 [2024-07-16 00:28:01.973045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.564 [2024-07-16 00:28:01.973073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.564 [2024-07-16 00:28:01.973103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.564 [2024-07-16 00:28:01.987337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:01.987371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:01.987389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.001993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.002025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.002056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.016340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.016373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.016391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.029452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.029485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.029503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.042012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.042040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.042070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.055788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.055821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.055840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.070759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.070794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.070812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.083751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.083786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.083805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.097529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.097563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.097581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.111935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.111964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.111979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.126139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.126186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.126205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.138044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.138074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.138091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.153119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.153149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.153165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.565 [2024-07-16 00:28:02.166379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.565 [2024-07-16 00:28:02.166412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.565 [2024-07-16 00:28:02.166431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.823 [2024-07-16 00:28:02.181254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.823 [2024-07-16 00:28:02.181292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.823 [2024-07-16 00:28:02.181321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.823 [2024-07-16 00:28:02.194793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.823 [2024-07-16 00:28:02.194827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.823 [2024-07-16 00:28:02.194846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.823 [2024-07-16 00:28:02.207130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.823 [2024-07-16 00:28:02.207166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:10.823 [2024-07-16 00:28:02.207197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.823 [2024-07-16 00:28:02.221239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.823 [2024-07-16 00:28:02.221274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.823 [2024-07-16 00:28:02.221292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.823 [2024-07-16 00:28:02.235374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.823 [2024-07-16 00:28:02.235413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.823 [2024-07-16 00:28:02.235433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.823 [2024-07-16 00:28:02.249104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.823 [2024-07-16 00:28:02.249133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.823 [2024-07-16 00:28:02.249149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.823 [2024-07-16 00:28:02.263643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.823 [2024-07-16 00:28:02.263677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.823 [2024-07-16 00:28:02.263696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.823 [2024-07-16 00:28:02.276932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.823 [2024-07-16 00:28:02.276961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.276978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.288867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.288924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.288940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.304070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.304099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:24029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.304116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.317644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.317677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.317695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.330303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.330336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.330354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.346297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.346332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.346351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.357803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.357837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.357855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.372937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.372967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.372983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.388356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.388390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.388408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.401235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.401268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.401286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.414606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.414639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.414657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.824 [2024-07-16 00:28:02.431350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:10.824 [2024-07-16 00:28:02.431383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.824 [2024-07-16 00:28:02.431402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.081 [2024-07-16 00:28:02.443511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:11.081 [2024-07-16 00:28:02.443547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.081 [2024-07-16 00:28:02.443567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.082 [2024-07-16 00:28:02.458786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:11.082 [2024-07-16 00:28:02.458820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.082 [2024-07-16 00:28:02.458839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.082 [2024-07-16 00:28:02.472997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:11.082 [2024-07-16 00:28:02.473030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.082 [2024-07-16 00:28:02.473055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.082 [2024-07-16 00:28:02.485221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 00:24:11.082 [2024-07-16 00:28:02.485255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.082 [2024-07-16 00:28:02.485273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.082 [2024-07-16 00:28:02.498023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b6f00) 
00:24:11.082 [2024-07-16 00:28:02.498053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.082 [2024-07-16 00:28:02.498069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:11.082
00:24:11.082 Latency(us)
00:24:11.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:11.082 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:11.082 nvme0n1 : 2.01 18471.26 72.15 0.00 0.00 6920.26 3203.98 19029.71
00:24:11.082 ===================================================================================================================
00:24:11.082 Total : 18471.26 72.15 0.00 0.00 6920.26 3203.98 19029.71
00:24:11.082 0
00:24:11.082 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:11.082 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:11.082 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:11.082 | .driver_specific
00:24:11.082 | .nvme_error
00:24:11.082 | .status_code
00:24:11.082 | .command_transient_transport_error'
00:24:11.082 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2382336
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2382336 ']'
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2382336
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2382336
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2382336'
00:24:11.343 killing process with pid 2382336
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2382336
00:24:11.343 Received shutdown signal, test time was about 2.000000 seconds
00:24:11.343
00:24:11.343 Latency(us)
00:24:11.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:11.343 ===================================================================================================================
00:24:11.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:11.343 00:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2382336
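The (( 145 > 0 )) check traced above is the test's pass/fail gate: get_transient_errcount (host/digest.sh@27-28) reads the per-controller NVMe error counters out of bdev_get_iostat and requires a non-zero COMMAND TRANSIENT TRANSPORT ERROR count (145 here, one per injected digest error). A minimal sketch of that helper, reassembled from the xtrace; the bperf_rpc wrapper is assumed to forward its arguments to scripts/rpc.py -s /var/tmp/bperf.sock:

    get_transient_errcount() {
        # The nvme_error counters are only populated because the controller
        # was created after bdev_nvme_set_options --nvme-error-stat (see the
        # setup trace further down).
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }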
00:24:11.601 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:11.601 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2382746
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2382746 /var/tmp/bperf.sock
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2382746 ']'
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:11.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:11.602 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:11.858 [2024-07-16 00:28:03.104211] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:24:11.858 [2024-07-16 00:28:03.104328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382746 ]
00:24:11.602 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:11.602 Zero copy mechanism will not be used.
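The second pass (randread, 131072-byte I/O, queue depth 16) starts a fresh bdevperf in idle mode on the same RPC socket and blocks until that socket answers before configuring it. A condensed sketch of the launch traced above, assuming $rootdir stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten:

    # -z keeps bdevperf idle so it can be configured over RPC before I/O begins
    "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # poll the UNIX domain socket (up to 100 tries) until it accepts RPCs
    max_retries=100
    until "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
        (( max_retries-- > 0 )) || exit 1
        sleep 0.1
    done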
00:24:11.602 EAL: No free 2048 kB hugepages reported on node 1
00:24:11.602 [2024-07-16 00:28:03.167751] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:11.602 [2024-07-16 00:28:03.278992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:11.858 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:11.858 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:11.858 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:11.858 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:12.115 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:12.115 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:12.115 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:12.115 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:12.115 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:12.115 00:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:12.678 nvme0n1
00:24:12.678 00:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:12.678 00:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:12.678 00:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:12.678 00:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:12.678 00:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:12.678 00:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:12.935 nvme0n1
00:24:12.935 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:12.935 Zero copy mechanism will not be used.
00:24:12.935 Running I/O for 2 seconds...
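The trace above shows how the error path is armed before the timed run: per-controller error statistics are enabled with unlimited bdev retries (-1), so each injected digest error is retried and counted as a transient transport error instead of failing the job; crc32c error injection is disabled while the controller attaches with data digest enabled (--ddgst); and only then is the accel layer told to corrupt every 32nd crc32c operation. Condensed from the xtrace, using the same test-framework helpers (bperf_rpc and bperf_py talk to /var/tmp/bperf.sock; which socket rpc_cmd targets is not visible in this trace, so treat that as an assumption):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable       # attach cleanly, no injection yet
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # data digest on
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 # corrupt every 32nd crc32c op
    bperf_py perform_tests                                      # kick off the 2-second run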
00:24:12.935 [2024-07-16 00:28:04.313462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.313520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.313543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.326362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.326398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.326417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.339294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.339328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.339347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.352102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.352131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.352166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.364891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.364944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.364960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.377563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.377597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.377616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.390322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.390355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.390373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.402940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.402983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.402999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.415631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.415664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.415682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.428560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.428593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.428612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.441321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.441353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.441372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.453983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.454011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.454042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.466747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.466779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.466797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.479344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.479377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.479395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.492103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.492133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.492150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.504708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.504741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.504766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.517422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.517455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.517473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.530051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.530078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.530110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:12.936 [2024-07-16 00:28:04.542873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:12.936 [2024-07-16 00:28:04.542915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.936 [2024-07-16 00:28:04.542934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.555586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.555623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.555642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.568383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.568418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:13.194 [2024-07-16 00:28:04.568437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.581218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.581252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.581271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.593819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.593853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.593871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.606602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.606634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.606652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.619479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.619517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.619536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.632056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.632085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.632102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.644619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.644651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.644670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.657361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.657393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.657410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.669998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.670027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.670058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.682806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.682839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.682857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.695326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.695358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.695376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.707916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.707945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.707961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.720515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.720547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.720565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.733191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.733236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.194 [2024-07-16 00:28:04.733255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:13.194 [2024-07-16 00:28:04.746018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0) 00:24:13.194 [2024-07-16 00:28:04.746047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.194 [2024-07-16 00:28:04.746063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:13.194 [2024-07-16 00:28:04.758693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x70e6b0)
00:24:13.194 [2024-07-16 00:28:04.758725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.194 [2024-07-16 00:28:04.758742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern -- nvme_tcp.c:1459 data digest error on tqpair=(0x70e6b0), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every remaining READ between 00:28:04.771 and 00:28:06.301, varying only in timestamp, lba, and sqhd; 155 transient transport errors are recorded for this run in total, per the iostat check below ...]
00:24:14.745
00:24:14.745 Latency(us)
00:24:14.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:14.745 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:14.745 nvme0n1 : 2.01 2395.78 299.47 0.00 0.00 6671.69 6068.15 15825.73
00:24:14.745 ===================================================================================================================
00:24:14.745 Total : 2395.78 299.47 0.00 0.00 6671.69 6068.15 15825.73
00:24:14.745 0
00:24:14.745
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:14.745 | .driver_specific
00:24:14.745 | .nvme_error
00:24:14.745 | .status_code
00:24:14.745 | .command_transient_transport_error'
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:15.003 00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 ))
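The check traced above is the point of the whole randread pass: host/digest.sh reads the per-bdev NVMe error counters over bdevperf's RPC socket and asserts that the injected digest failures actually surfaced as transient transport errors (155 here; the throughput row is self-consistent too, since 2395.78 IOPS x 128 KiB per I/O gives the reported 299.47 MiB/s). A minimal standalone sketch of that check, assuming the same workspace layout, a bdevperf instance listening on /var/tmp/bperf.sock, and --nvme-error-stat already enabled:

  # Sketch (not from the log): count transient transport errors for a bdev.
  get_transient_errcount() {
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  (( errcount > 0 ))   # the test fails here if no injected error was counted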
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2382746
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2382746 ']'
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2382746
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:15.003 00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2382746
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2382746'
killing process with pid 2382746
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2382746
Received shutdown signal, test time was about 2.000000 seconds
00:24:15.261
00:24:15.261 Latency(us)
00:24:15.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:15.261 ===================================================================================================================
00:24:15.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2382746
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2383275
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2383275 /var/tmp/bperf.sock
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2383275 ']'
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
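run_bperf_err then restarts the workload generator for the write direction: bdevperf is launched with a 4 KiB randwrite workload at queue depth 128 for 2 seconds, started suspended (-z) so the test can configure it over /var/tmp/bperf.sock before any I/O is issued. A rough equivalent of the launch-and-wait step, assuming the same tree layout; the polling loop is only a stand-in for autotest_common.sh's waitforlisten, whose exact implementation is not shown in this log:

  # Sketch (not from the log): launch bdevperf suspended and wait for its RPC socket.
  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # -z: idle until perform_tests is sent over the socket
  $BDEVPERF -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

  # poll until the UNIX-domain RPC socket answers (assumed check: rpc_get_methods)
  for ((i = 0; i < 100; i++)); do
      $RPC -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done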
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2383275
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2383275 /var/tmp/bperf.sock
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2383275 ']'
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:15.519 [2024-07-16 00:28:06.930165] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:24:15.519 [2024-07-16 00:28:06.930252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383275 ]
00:24:15.519 EAL: No free 2048 kB hugepages reported on node 1
00:24:15.519 [2024-07-16 00:28:06.988126] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:15.519 [2024-07-16 00:28:07.096722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:15.778 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:15.778 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:15.778 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:15.778 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:16.036 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:16.036 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:16.036 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:16.036 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:16.036 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:16.036 00:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:16.601 nvme0n1
00:24:16.601 00:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:16.601 00:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:16.601 00:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:16.601 00:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:16.601 00:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:16.601 00:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:16.601 Running I/O for 2 seconds...
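The setup just traced, condensed into one place (a sketch, not the verbatim script): bperf_rpc targets the bdevperf socket, while rpc_cmd carries no -s flag in the trace and is assumed here to hit the nvmf target's default RPC socket. The injected crc32c corruption is what makes every WRITE below fail its TCP data digest check and complete with TRANSIENT TRANSPORT ERROR (00/22).

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Keep per-bdev NVMe error statistics and retry failed I/O indefinitely.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start clean: no crc32c error injection yet (default socket assumed).
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest enabled (--ddgst).
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c results (-i 256 as traced), breaking the data digests.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the 2-second workload on the idle bdevperf instance.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests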
00:24:16.601 [2024-07-16 00:28:08.189585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048
00:24:16.601 [2024-07-16 00:28:08.189992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.601 [2024-07-16 00:28:08.190031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:16.601 [2024-07-16 00:28:08.203942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048
00:24:16.601 [2024-07-16 00:28:08.204220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.601 [2024-07-16 00:28:08.204254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:16.860 [2024-07-16 00:28:08.218472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048
00:24:16.860 [2024-07-16 00:28:08.218831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.860 [2024-07-16 00:28:08.218865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2081 Data digest error, the WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 14 ms for the rest of the 2-second run, cid cycling 115-120 over varying LBAs, from 00:28:08.232587 through 00:28:09.801997 ...]
00:24:18.434 [2024-07-16 00:28:09.815729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048
00:24:18.434 [2024-07-16 00:28:09.816093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.434 [2024-07-16 00:28:09.816121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:18.434 [2024-07-16 00:28:09.829966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048
00:24:18.434 [2024-07-16 00:28:09.830312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22136 len:1 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.830344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.844143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.844506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.844537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.858207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.858553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.858584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.872177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.872540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.872575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.886158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.886512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.886544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.900137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.900509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.900540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.914128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.914492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.914524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.928152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.928511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 
nsid:1 lba:16541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.928542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.942114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.942478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.942509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.956130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.956518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.956549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.970301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.970652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.970683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.984451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.984805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.984836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:09.998358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:09.998714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:09.998745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:10.011756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:10.012099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:10.012132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:10.025160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:10.025486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:10.025518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.434 [2024-07-16 00:28:10.039136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.434 [2024-07-16 00:28:10.039494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.434 [2024-07-16 00:28:10.039528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.053900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 [2024-07-16 00:28:10.054268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.054302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.068559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 [2024-07-16 00:28:10.068827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.068860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.082816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 [2024-07-16 00:28:10.083199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.083227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.097191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 [2024-07-16 00:28:10.097543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.097575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.111489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 [2024-07-16 00:28:10.111841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.111872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.125589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 
[2024-07-16 00:28:10.125863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.125899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.139714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 [2024-07-16 00:28:10.140108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.140137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.154016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 [2024-07-16 00:28:10.154305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.154337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 [2024-07-16 00:28:10.168060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12226a0) with pdu=0x2000190fb048 00:24:18.693 [2024-07-16 00:28:10.168425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.693 [2024-07-16 00:28:10.168456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.693 00:24:18.693 Latency(us) 00:24:18.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.693 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:18.693 nvme0n1 : 2.01 18143.18 70.87 0.00 0.00 7038.16 4733.16 15922.82 00:24:18.693 =================================================================================================================== 00:24:18.693 Total : 18143.18 70.87 0.00 0.00 7038.16 4733.16 15922.82 00:24:18.693 0 00:24:18.693 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:18.693 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:18.693 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:18.693 | .driver_specific 00:24:18.693 | .nvme_error 00:24:18.693 | .status_code 00:24:18.693 | .command_transient_transport_error' 00:24:18.693 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:18.951 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:24:18.951 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2383275 00:24:18.951 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2383275 ']' 00:24:18.951 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2383275 00:24:18.951 00:28:10 
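[editor's note] The 142 asserted just above is the per-bdev transient-transport-error counter that bdev_nvme keeps once error statistics are enabled (this job passes bdev_nvme_set_options --nvme-error-stat before attaching, see below). A minimal sketch of the same extraction done by hand, assuming the bperf RPC socket and bdev name used in this run:

    #!/usr/bin/env bash
    # Sketch: pull the transient transport error count for nvme0n1 the way
    # host/digest.sh's get_transient_errcount does in the trace above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    errcount=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')

    # The test only asserts that at least one injected digest error surfaced.
    (( errcount > 0 )) && echo "saw $errcount transient transport errors"

    # Cross-check of the bdevperf summary above: 18143.18 IOPS of 4 KiB writes
    # = 18143.18 * 4096 / 1048576 = 70.87 MiB/s, matching the printed table.
    echo "scale=2; 18143.18 * 4096 / 1048576" | bc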
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2383275
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2383275 ']'
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2383275
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:18.951 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383275
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383275'
killing process with pid 2383275
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2383275
Received shutdown signal, test time was about 2.000000 seconds
00:24:18.951
00:24:18.951 Latency(us)
00:24:18.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:18.951 ===================================================================================================================
00:24:18.951 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:18.951 00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2383275
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2383687
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2383687 /var/tmp/bperf.sock
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2383687 ']'
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-07-16 00:28:10.797294] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
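[editor's note] The launch-and-wait dance above is easy to reproduce outside the harness. A rough sketch under the same paths, with waitforlisten approximated by polling the RPC socket (the real helper in autotest_common.sh does more bookkeeping); rpc_get_methods is just a cheap RPC to probe with:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # -m 2 pins bdevperf to core 1 (mask 0x2); -z makes it idle until an RPC
    # triggers the workload; -o/-q/-t mirror the 128 KiB, qd 16, 2 s run above.
    "$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll until the UNIX-domain RPC server answers, up to max_retries=100.
    for ((i = 0; i < 100; i++)); do
        "$SPDK"/scripts/rpc.py -s "$SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done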
00:24:19.209 [2024-07-16 00:28:10.797371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383687 ]
00:24:19.209 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:19.209 Zero copy mechanism will not be used.
00:24:19.467 EAL: No free 2048 kB hugepages reported on node 1
00:24:19.467 [2024-07-16 00:28:10.855260] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:19.467 [2024-07-16 00:28:10.963848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:19.467 00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:20.031 00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:20.287 nvme0n1
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:20.545 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:20.545 Zero copy mechanism will not be used.
00:24:20.545 Running I/O for 2 seconds...
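[editor's note] Everything this second pass does before "Running I/O" boils down to five RPCs against the bperf socket. Collected from the trace above into a runnable sketch; the comments on what -t disable / -t corrupt and -i 32 mean are an interpretation of the accel error-injection RPC as used here, not taken from the log itself:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # left unquoted below on purpose

    # Count NVMe error statuses per bdev and never fail I/O up to the bdev layer.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable    # attach with clean digests first
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # prints: nvme0n1
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32   # now corrupt crc32c results
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests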
00:24:20.545 [2024-07-16 00:28:11.994308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90
00:24:20.545 [2024-07-16 00:28:11.994849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:20.545 [2024-07-16 00:28:11.994905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats every ~18-20 ms from 00:28:12.012538 through 00:28:13.512300 on tqpair=(0x1057c90) with pdu=0x2000190fef90: each injected crc32c error fails one 128 KiB WRITE (sqid:1 cid:15 len:32) with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0021/0041/0061/0001 ...]
00:24:22.095 [2024-07-16 00:28:13.531386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90
00:24:22.095 [2024-07-16 00:28:13.531739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:22.095 [2024-07-16 00:28:13.531767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.549935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.550297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.550325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.569319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.569714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.569742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.588516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.588968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.589014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.608067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.608502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.608547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.625914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.626371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.626414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.646223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.646623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.646650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.665387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.665892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.665920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.685238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.685728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.095 [2024-07-16 00:28:13.703100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.095 [2024-07-16 00:28:13.703596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.095 [2024-07-16 00:28:13.703641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.720415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.720812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.720857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.739334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.739751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.739779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.757948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.758324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.758368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.777132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.777570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.777598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.795937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.796323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.796350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.815844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.816429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.816472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.835497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.835997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.836040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.854225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.854603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.854644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.873095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.873479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.873510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.893482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.893863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.893922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.913092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.913407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 [2024-07-16 00:28:13.913434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.931896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.353 [2024-07-16 00:28:13.932342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.353 
[2024-07-16 00:28:13.932369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.353 [2024-07-16 00:28:13.951435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.354 [2024-07-16 00:28:13.951975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.354 [2024-07-16 00:28:13.952016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.611 [2024-07-16 00:28:13.971096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1057c90) with pdu=0x2000190fef90 00:24:22.611 [2024-07-16 00:28:13.971503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.611 [2024-07-16 00:28:13.971546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.611 00:24:22.611 Latency(us) 00:24:22.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.611 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:22.611 nvme0n1 : 2.01 1651.75 206.47 0.00 0.00 9662.73 3131.16 21359.88 00:24:22.611 =================================================================================================================== 00:24:22.611 Total : 1651.75 206.47 0.00 0.00 9662.73 3131.16 21359.88 00:24:22.611 0 00:24:22.611 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:22.611 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:22.611 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:22.611 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:22.611 | .driver_specific 00:24:22.611 | .nvme_error 00:24:22.611 | .status_code 00:24:22.611 | .command_transient_transport_error' 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 106 > 0 )) 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2383687 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2383687 ']' 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2383687 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383687 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 2383687' 00:24:22.870 killing process with pid 2383687 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2383687 00:24:22.870 Received shutdown signal, test time was about 2.000000 seconds 00:24:22.870 00:24:22.870 Latency(us) 00:24:22.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.870 =================================================================================================================== 00:24:22.870 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.870 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2383687 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2382312 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2382312 ']' 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2382312 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2382312 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2382312' 00:24:23.129 killing process with pid 2382312 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2382312 00:24:23.129 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2382312 00:24:23.387 00:24:23.387 real 0m15.871s 00:24:23.387 user 0m30.845s 00:24:23.387 sys 0m4.328s 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:23.387 ************************************ 00:24:23.387 END TEST nvmf_digest_error 00:24:23.387 ************************************ 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.387 rmmod nvme_tcp 00:24:23.387 rmmod nvme_fabrics 00:24:23.387 rmmod nvme_keyring 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:23.387 00:28:14 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2382312 ']' 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2382312 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2382312 ']' 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2382312 00:24:23.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2382312) - No such process 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2382312 is not found' 00:24:23.387 Process with pid 2382312 is not found 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.387 00:28:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.916 00:28:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.916 00:24:25.916 real 0m36.324s 00:24:25.916 user 1m3.496s 00:24:25.916 sys 0m9.697s 00:24:25.916 00:28:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.916 00:28:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:25.916 ************************************ 00:24:25.916 END TEST nvmf_digest 00:24:25.916 ************************************ 00:24:25.916 00:28:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.916 00:28:16 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:25.916 00:28:16 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:25.916 00:28:16 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:25.916 00:28:16 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:25.916 00:28:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.916 00:28:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.916 00:28:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.916 ************************************ 00:24:25.916 START TEST nvmf_bdevperf 00:24:25.916 ************************************ 00:24:25.916 00:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:25.916 * Looking for test storage... 
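The digest_error trace above drives writes through nvme0n1 with the data digest enabled; each mismatch logged by tcp.c:2081:data_crc32_calc_done comes back as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, and the test then asserts that the per-bdev error counter moved. A minimal standalone sketch of that final check, assuming the same /var/tmp/bperf.sock RPC socket and nvme0n1 bdev name host/digest.sh uses in this run:

  # count transient transport errors via bdev_get_iostat, as in the trace above;
  # /var/tmp/bperf.sock and nvme0n1 are the socket/bdev names from this run
  count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 )) || exit 1   # this run counted 106 such completions, so the (( 106 > 0 )) gate passed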
00:24:25.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.917 00:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.818 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:27.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:27.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.819 00:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:27.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:27.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:24:27.819 00:24:27.819 --- 10.0.0.2 ping statistics --- 00:24:27.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.819 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:24:27.819 00:24:27.819 --- 10.0.0.1 ping statistics --- 00:24:27.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.819 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2386149 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2386149 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2386149 ']' 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.819 00:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:27.819 [2024-07-16 00:28:19.208747] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:24:27.819 [2024-07-16 00:28:19.208838] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.819 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.819 [2024-07-16 00:28:19.277554] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:27.819 [2024-07-16 00:28:19.394386] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
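The nvmftestinit trace above builds the physical TCP topology for this phy run: the first detected e810 port (cvl_0_0) is moved into a private network namespace as the target side, the second (cvl_0_1) stays in the root namespace as the initiator, and both directions are ping-verified before nvmf_tgt starts. A condensed sketch of the same sequence, using the interface and namespace names shown in the trace:

  # condensed from the nvmf_tcp_init steps traced above (nvmf/common.sh)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator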
00:24:27.819 [2024-07-16 00:28:19.394456] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.819 [2024-07-16 00:28:19.394472] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.819 [2024-07-16 00:28:19.394486] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.819 [2024-07-16 00:28:19.394498] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.819 [2024-07-16 00:28:19.394599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.819 [2024-07-16 00:28:19.394830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.819 [2024-07-16 00:28:19.394833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.754 [2024-07-16 00:28:20.213804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.754 Malloc0 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.754 [2024-07-16 00:28:20.281530] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.754 { 00:24:28.754 "params": { 00:24:28.754 "name": "Nvme$subsystem", 00:24:28.754 "trtype": "$TEST_TRANSPORT", 00:24:28.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.754 "adrfam": "ipv4", 00:24:28.754 "trsvcid": "$NVMF_PORT", 00:24:28.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.754 "hdgst": ${hdgst:-false}, 00:24:28.754 "ddgst": ${ddgst:-false} 00:24:28.754 }, 00:24:28.754 "method": "bdev_nvme_attach_controller" 00:24:28.754 } 00:24:28.754 EOF 00:24:28.754 )") 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:28.754 00:28:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:28.754 "params": { 00:24:28.754 "name": "Nvme1", 00:24:28.754 "trtype": "tcp", 00:24:28.754 "traddr": "10.0.0.2", 00:24:28.754 "adrfam": "ipv4", 00:24:28.754 "trsvcid": "4420", 00:24:28.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.754 "hdgst": false, 00:24:28.754 "ddgst": false 00:24:28.754 }, 00:24:28.754 "method": "bdev_nvme_attach_controller" 00:24:28.754 }' 00:24:28.754 [2024-07-16 00:28:20.331075] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:24:28.754 [2024-07-16 00:28:20.331165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386305 ] 00:24:28.754 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.013 [2024-07-16 00:28:20.391344] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.013 [2024-07-16 00:28:20.508397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.270 Running I/O for 1 seconds... 
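The tgt_init sequence above provisions the target entirely over JSON-RPC before the first bdevperf run: a TCP transport, a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), subsystem cnode1 carrying that namespace, and a TCP listener on 10.0.0.2:4420. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; a sketch of the same provisioning written as direct rpc.py calls, assuming the target's default RPC socket:

  # the rpc_cmd sequence traced above, written out as direct rpc.py calls
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420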
00:24:30.642 00:24:30.642 Latency(us) 00:24:30.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.642 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:30.642 Verification LBA range: start 0x0 length 0x4000 00:24:30.642 Nvme1n1 : 1.01 8690.85 33.95 0.00 0.00 14670.46 958.77 19709.35 00:24:30.642 =================================================================================================================== 00:24:30.642 Total : 8690.85 33.95 0.00 0.00 14670.46 958.77 19709.35 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2386449 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.642 { 00:24:30.642 "params": { 00:24:30.642 "name": "Nvme$subsystem", 00:24:30.642 "trtype": "$TEST_TRANSPORT", 00:24:30.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.642 "adrfam": "ipv4", 00:24:30.642 "trsvcid": "$NVMF_PORT", 00:24:30.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.642 "hdgst": ${hdgst:-false}, 00:24:30.642 "ddgst": ${ddgst:-false} 00:24:30.642 }, 00:24:30.642 "method": "bdev_nvme_attach_controller" 00:24:30.642 } 00:24:30.642 EOF 00:24:30.642 )") 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:30.642 00:28:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:30.642 "params": { 00:24:30.642 "name": "Nvme1", 00:24:30.642 "trtype": "tcp", 00:24:30.642 "traddr": "10.0.0.2", 00:24:30.642 "adrfam": "ipv4", 00:24:30.642 "trsvcid": "4420", 00:24:30.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:30.642 "hdgst": false, 00:24:30.642 "ddgst": false 00:24:30.642 }, 00:24:30.642 "method": "bdev_nvme_attach_controller" 00:24:30.642 }' 00:24:30.642 [2024-07-16 00:28:22.141117] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:24:30.642 [2024-07-16 00:28:22.141201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386449 ] 00:24:30.642 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.642 [2024-07-16 00:28:22.203335] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.901 [2024-07-16 00:28:22.319418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.901 Running I/O for 15 seconds... 
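The second bdevperf invocation above adds -f and runs for 15 seconds so the application survives I/O failures; host/bdevperf.sh then SIGKILLs the target mid-run, which is what produces the ABORTED - SQ DELETION storm that follows. A sketch of that kill-mid-run flow, assuming the gen_nvmf_target_json output shown above has been saved to a file (bperf.json is a stand-in name, not one used by the script):

  # kill-mid-run flow driven by host/bdevperf.sh@29-35 above
  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $bdevperf --json bperf.json -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!
  sleep 3
  kill -9 "$nvmfpid"   # nvmfpid (2386149 above) is the nvmf_tgt started by nvmfappstart
  sleep 3              # -f keeps bdevperf alive; queued I/O completes as ABORTED - SQ DELETION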
00:24:34.197 00:28:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2386149
00:24:34.197 00:28:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:24:34.197 [2024-07-16 00:28:25.109519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.197 [2024-07-16 00:28:25.109576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.197 [2024-07-16 00:28:25.109612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:34.197 [2024-07-16 00:28:25.109633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... roughly 125 further command/completion pairs elided: every in-flight READ (lba 50440-50664) and WRITE (lba 50672-51448) on sqid:1 completes with ABORTED - SQ DELETION (00/08) ...]
00:24:34.201 [2024-07-16 00:28:25.113365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bac80 is same with the state(5) to be set
00:24:34.201 [2024-07-16 00:28:25.113381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:34.201 [2024-07-16 00:28:25.113392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:34.201 [2024-07-16 00:28:25.113403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51456 len:8 PRP1 0x0 PRP2 0x0
00:24:34.201 [2024-07-16 00:28:25.113414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.201 [2024-07-16 00:28:25.113473] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26bac80 was disconnected and freed. reset controller.
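Every completion in that burst carries status (00/08): status code type 0x0, generic command status, and status code 0x08, ABORTED - SQ DELETION. That is the expected way for queued I/O to finish once the connection is gone and the submission queue is torn down; it is an abort, not a media or data-integrity error. A small sketch of how the (SCT/SC) pair in these lines maps to a name (table abridged; the full mapping lives in the NVMe base spec and SPDK's nvme_qpair.c):

decode_nvme_status() {
  # $1 = SCT, $2 = SC, both as the two hex digits from "(SCT/SC)" in the log
  case "$1/$2" in
    00/00) echo "SUCCESS" ;;
    00/08) echo "ABORTED - SQ DELETION" ;;
    *)     echo "sct=0x$1 sc=0x$2 (see the NVMe base spec status tables)" ;;
  esac
}
decode_nvme_status 00 08   # prints: ABORTED - SQ DELETION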
00:24:34.201 [2024-07-16 00:28:25.113549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.201 [2024-07-16 00:28:25.113569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.201 [2024-07-16 00:28:25.113584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.201 [2024-07-16 00:28:25.113597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.201 [2024-07-16 00:28:25.113624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.201 [2024-07-16 00:28:25.113636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.201 [2024-07-16 00:28:25.113651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.201 [2024-07-16 00:28:25.113672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.201 [2024-07-16 00:28:25.113693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.201 [2024-07-16 00:28:25.116767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.201 [2024-07-16 00:28:25.116802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.201 [2024-07-16 00:28:25.117586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.201 [2024-07-16 00:28:25.117622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.202 [2024-07-16 00:28:25.117655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.202 [2024-07-16 00:28:25.117917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.202 [2024-07-16 00:28:25.118139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.202 [2024-07-16 00:28:25.118163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.202 [2024-07-16 00:28:25.118196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.202 [2024-07-16 00:28:25.121930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.202 [2024-07-16 00:28:25.130988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.202 [2024-07-16 00:28:25.131473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.202 [2024-07-16 00:28:25.131506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.202 [2024-07-16 00:28:25.131524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.202 [2024-07-16 00:28:25.131763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.202 [2024-07-16 00:28:25.132016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.202 [2024-07-16 00:28:25.132042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.202 [2024-07-16 00:28:25.132058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.202 [2024-07-16 00:28:25.135617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same reset cycle (resetting controller, connect() failed, errno = 111, controller reinitialization failed, Resetting controller failed.) repeats 11 more times at roughly 14 ms intervals, through 00:28:25.288336, while the target remains down ...]
00:24:34.203 [2024-07-16 00:28:25.297621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.203 [2024-07-16 00:28:25.298088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.203 [2024-07-16 00:28:25.298120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.203 [2024-07-16 00:28:25.298138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.203 [2024-07-16 00:28:25.298377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.203 [2024-07-16 00:28:25.298619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.203 [2024-07-16 00:28:25.298649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.203 [2024-07-16 00:28:25.298666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.203 [2024-07-16 00:28:25.302238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.203 [2024-07-16 00:28:25.311508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.203 [2024-07-16 00:28:25.311954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.203 [2024-07-16 00:28:25.311983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.203 [2024-07-16 00:28:25.312000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.203 [2024-07-16 00:28:25.312238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.312431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.312451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.312463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.315966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.204 [2024-07-16 00:28:25.325445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.204 [2024-07-16 00:28:25.325906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.204 [2024-07-16 00:28:25.325938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.204 [2024-07-16 00:28:25.325956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.204 [2024-07-16 00:28:25.326194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.326438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.326462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.326478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.330056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.204 [2024-07-16 00:28:25.339324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.204 [2024-07-16 00:28:25.339919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.204 [2024-07-16 00:28:25.339951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.204 [2024-07-16 00:28:25.339969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.204 [2024-07-16 00:28:25.340207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.340451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.340475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.340490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.344068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.204 [2024-07-16 00:28:25.353349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.204 [2024-07-16 00:28:25.353819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.204 [2024-07-16 00:28:25.353850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.204 [2024-07-16 00:28:25.353868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.204 [2024-07-16 00:28:25.354118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.354361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.354386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.354401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.357988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.204 [2024-07-16 00:28:25.367261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.204 [2024-07-16 00:28:25.367854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.204 [2024-07-16 00:28:25.367933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.204 [2024-07-16 00:28:25.367952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.204 [2024-07-16 00:28:25.368190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.368433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.368457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.368472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.372061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.204 [2024-07-16 00:28:25.381122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.204 [2024-07-16 00:28:25.381575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.204 [2024-07-16 00:28:25.381607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.204 [2024-07-16 00:28:25.381624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.204 [2024-07-16 00:28:25.381862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.382117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.382142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.382158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.385726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.204 [2024-07-16 00:28:25.395014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.204 [2024-07-16 00:28:25.395480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.204 [2024-07-16 00:28:25.395511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.204 [2024-07-16 00:28:25.395529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.204 [2024-07-16 00:28:25.395772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.396027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.396052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.396068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.399636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.204 [2024-07-16 00:28:25.408915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.204 [2024-07-16 00:28:25.409378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.204 [2024-07-16 00:28:25.409410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.204 [2024-07-16 00:28:25.409427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.204 [2024-07-16 00:28:25.409666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.409921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.409950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.409966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.413531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.204 [2024-07-16 00:28:25.422791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.204 [2024-07-16 00:28:25.423261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.204 [2024-07-16 00:28:25.423289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.204 [2024-07-16 00:28:25.423304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.204 [2024-07-16 00:28:25.423550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.204 [2024-07-16 00:28:25.423743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.204 [2024-07-16 00:28:25.423762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.204 [2024-07-16 00:28:25.423774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.204 [2024-07-16 00:28:25.427334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.204 [2024-07-16 00:28:25.436798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.437264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.437296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.437314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.437552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.437795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.437819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.205 [2024-07-16 00:28:25.437839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.205 [2024-07-16 00:28:25.441413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.205 [2024-07-16 00:28:25.450691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.451138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.451167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.451198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.451445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.451639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.451658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.205 [2024-07-16 00:28:25.451670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.205 [2024-07-16 00:28:25.455219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.205 [2024-07-16 00:28:25.464690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.465142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.465175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.465193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.465432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.465675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.465699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.205 [2024-07-16 00:28:25.465715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.205 [2024-07-16 00:28:25.469290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.205 [2024-07-16 00:28:25.478544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.479001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.479033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.479051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.479288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.479531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.479555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.205 [2024-07-16 00:28:25.479570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.205 [2024-07-16 00:28:25.483144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.205 [2024-07-16 00:28:25.492404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.492864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.492903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.492922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.493160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.493403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.493427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.205 [2024-07-16 00:28:25.493442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.205 [2024-07-16 00:28:25.497013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.205 [2024-07-16 00:28:25.506269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.506725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.506756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.506774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.507023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.507267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.507291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.205 [2024-07-16 00:28:25.507306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.205 [2024-07-16 00:28:25.510868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.205 [2024-07-16 00:28:25.520134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.520564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.520595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.520613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.520850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.521104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.521129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.205 [2024-07-16 00:28:25.521145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.205 [2024-07-16 00:28:25.524713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.205 [2024-07-16 00:28:25.533975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.534438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.534469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.534486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.534724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.534984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.535009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.205 [2024-07-16 00:28:25.535024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.205 [2024-07-16 00:28:25.538589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.205 [2024-07-16 00:28:25.547848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.205 [2024-07-16 00:28:25.548327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.205 [2024-07-16 00:28:25.548359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.205 [2024-07-16 00:28:25.548376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.205 [2024-07-16 00:28:25.548614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.205 [2024-07-16 00:28:25.548857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.205 [2024-07-16 00:28:25.548897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.548915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.552479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.206 [2024-07-16 00:28:25.561740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.562201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.562232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.562250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.562488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.562731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.206 [2024-07-16 00:28:25.562755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.562770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.566348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.206 [2024-07-16 00:28:25.575605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.576043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.576075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.576093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.576331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.576573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.206 [2024-07-16 00:28:25.576598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.576613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.580198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.206 [2024-07-16 00:28:25.589456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.589890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.589922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.589939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.590177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.590420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.206 [2024-07-16 00:28:25.590444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.590460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.594031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.206 [2024-07-16 00:28:25.603291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.603745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.603777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.603794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.604045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.604288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.206 [2024-07-16 00:28:25.604312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.604327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.607898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.206 [2024-07-16 00:28:25.617152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.617588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.617619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.617636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.617874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.618128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.206 [2024-07-16 00:28:25.618152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.618168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.621732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.206 [2024-07-16 00:28:25.630998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.631449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.631480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.631503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.631742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.631997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.206 [2024-07-16 00:28:25.632022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.632037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.635600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.206 [2024-07-16 00:28:25.644861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.645321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.645352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.645369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.645607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.645850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.206 [2024-07-16 00:28:25.645874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.645902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.649473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.206 [2024-07-16 00:28:25.658756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.659228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.659260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.659278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.659517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.659760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.206 [2024-07-16 00:28:25.659784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.206 [2024-07-16 00:28:25.659800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.206 [2024-07-16 00:28:25.663377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.206 [2024-07-16 00:28:25.672637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.206 [2024-07-16 00:28:25.673109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.206 [2024-07-16 00:28:25.673140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.206 [2024-07-16 00:28:25.673158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.206 [2024-07-16 00:28:25.673396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.206 [2024-07-16 00:28:25.673647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.673672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.673687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.677262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.207 [2024-07-16 00:28:25.686524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.207 [2024-07-16 00:28:25.686950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.207 [2024-07-16 00:28:25.686978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.207 [2024-07-16 00:28:25.686993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.207 [2024-07-16 00:28:25.687227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.207 [2024-07-16 00:28:25.687421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.687440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.687452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.690959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.207 [2024-07-16 00:28:25.700429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.207 [2024-07-16 00:28:25.700905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.207 [2024-07-16 00:28:25.700937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.207 [2024-07-16 00:28:25.700955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.207 [2024-07-16 00:28:25.701193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.207 [2024-07-16 00:28:25.701436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.701460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.701475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.705052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.207 [2024-07-16 00:28:25.714306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.207 [2024-07-16 00:28:25.714737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.207 [2024-07-16 00:28:25.714769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.207 [2024-07-16 00:28:25.714787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.207 [2024-07-16 00:28:25.715037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.207 [2024-07-16 00:28:25.715280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.715304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.715320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.718891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.207 [2024-07-16 00:28:25.728155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.207 [2024-07-16 00:28:25.728611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.207 [2024-07-16 00:28:25.728642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.207 [2024-07-16 00:28:25.728660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.207 [2024-07-16 00:28:25.728910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.207 [2024-07-16 00:28:25.729154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.729178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.729193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.732757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.207 [2024-07-16 00:28:25.742051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.207 [2024-07-16 00:28:25.742627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.207 [2024-07-16 00:28:25.742687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.207 [2024-07-16 00:28:25.742704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.207 [2024-07-16 00:28:25.742956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.207 [2024-07-16 00:28:25.743199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.743223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.743239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.746805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.207 [2024-07-16 00:28:25.756085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.207 [2024-07-16 00:28:25.756636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.207 [2024-07-16 00:28:25.756668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.207 [2024-07-16 00:28:25.756686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.207 [2024-07-16 00:28:25.756936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.207 [2024-07-16 00:28:25.757180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.757204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.757220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.760784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.207 [2024-07-16 00:28:25.770074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.207 [2024-07-16 00:28:25.770697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.207 [2024-07-16 00:28:25.770756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.207 [2024-07-16 00:28:25.770780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.207 [2024-07-16 00:28:25.771031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.207 [2024-07-16 00:28:25.771275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.771299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.771314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.774892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.207 [2024-07-16 00:28:25.783956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.207 [2024-07-16 00:28:25.784579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.207 [2024-07-16 00:28:25.784640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:34.207 [2024-07-16 00:28:25.784660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:34.207 [2024-07-16 00:28:25.784911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:34.207 [2024-07-16 00:28:25.785160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.207 [2024-07-16 00:28:25.785185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.207 [2024-07-16 00:28:25.785200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.207 [2024-07-16 00:28:25.788767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.207 [2024-07-16 00:28:25.797836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.207 [2024-07-16 00:28:25.798310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.207 [2024-07-16 00:28:25.798342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.207 [2024-07-16 00:28:25.798359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.208 [2024-07-16 00:28:25.798597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.208 [2024-07-16 00:28:25.798840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.208 [2024-07-16 00:28:25.798864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.208 [2024-07-16 00:28:25.798890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.208 [2024-07-16 00:28:25.802512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.489 [2024-07-16 00:28:25.811860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.489 [2024-07-16 00:28:25.812310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.489 [2024-07-16 00:28:25.812343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.489 [2024-07-16 00:28:25.812362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.489 [2024-07-16 00:28:25.812602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.489 [2024-07-16 00:28:25.812845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.489 [2024-07-16 00:28:25.812875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.489 [2024-07-16 00:28:25.812905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.489 [2024-07-16 00:28:25.816572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.489 [2024-07-16 00:28:25.825635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.489 [2024-07-16 00:28:25.826113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.489 [2024-07-16 00:28:25.826153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.489 [2024-07-16 00:28:25.826174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.489 [2024-07-16 00:28:25.826431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.489 [2024-07-16 00:28:25.826768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.489 [2024-07-16 00:28:25.826804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.489 [2024-07-16 00:28:25.826832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.489 [2024-07-16 00:28:25.830993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.489 [2024-07-16 00:28:25.840348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.489 [2024-07-16 00:28:25.840816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.489 [2024-07-16 00:28:25.840862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.489 [2024-07-16 00:28:25.840905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.489 [2024-07-16 00:28:25.841166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.489 [2024-07-16 00:28:25.841412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.489 [2024-07-16 00:28:25.841437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.489 [2024-07-16 00:28:25.841453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.489 [2024-07-16 00:28:25.845047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.489 [2024-07-16 00:28:25.854364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.489 [2024-07-16 00:28:25.854929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.489 [2024-07-16 00:28:25.854962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.489 [2024-07-16 00:28:25.854981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.489 [2024-07-16 00:28:25.855220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.489 [2024-07-16 00:28:25.855463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.489 [2024-07-16 00:28:25.855488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.489 [2024-07-16 00:28:25.855503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.489 [2024-07-16 00:28:25.859094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.489 [2024-07-16 00:28:25.868405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.489 [2024-07-16 00:28:25.868850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.489 [2024-07-16 00:28:25.868893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.489 [2024-07-16 00:28:25.868915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.489 [2024-07-16 00:28:25.869154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.489 [2024-07-16 00:28:25.869397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.489 [2024-07-16 00:28:25.869421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.489 [2024-07-16 00:28:25.869436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.489 [2024-07-16 00:28:25.873047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.489 [2024-07-16 00:28:25.882342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.489 [2024-07-16 00:28:25.882817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.489 [2024-07-16 00:28:25.882849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.489 [2024-07-16 00:28:25.882867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.489 [2024-07-16 00:28:25.883118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.489 [2024-07-16 00:28:25.883361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.489 [2024-07-16 00:28:25.883386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.489 [2024-07-16 00:28:25.883401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.489 [2024-07-16 00:28:25.886981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.489 [2024-07-16 00:28:25.896260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.489 [2024-07-16 00:28:25.896726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.489 [2024-07-16 00:28:25.896758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.489 [2024-07-16 00:28:25.896776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:25.897027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:25.897271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:25.897296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:25.897311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:25.900892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:25.910170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:25.910635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:25.910666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:25.910684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:25.910938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:25.911182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:25.911206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:25.911221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:25.914783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:25.924068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:25.924599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:25.924649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:25.924667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:25.924915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:25.925162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:25.925187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:25.925202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:25.928768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:25.938044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:25.938557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:25.938588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:25.938605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:25.938843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:25.939096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:25.939121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:25.939136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:25.942701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:25.951997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:25.952500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:25.952548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:25.952566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:25.952804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:25.953058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:25.953084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:25.953104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:25.956673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:25.965936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:25.966385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:25.966413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:25.966428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:25.966662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:25.966855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:25.966874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:25.966912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:25.970444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:25.979916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:25.980351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:25.980382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:25.980401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:25.980638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:25.980894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:25.980919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:25.980935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:25.984498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:25.993757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:25.994222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:25.994254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:25.994271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:25.994509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:25.994752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:25.994776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:25.994791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:25.998364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:26.007623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:26.008084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:26.008121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:26.008140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:26.008378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:26.008621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:26.008645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:26.008661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:26.012236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:26.021487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:26.021916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:26.021948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:26.021965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:26.022202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:26.022446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:26.022470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:26.022486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:26.026064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:26.035324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:26.035874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:26.035936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:26.035953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.490 [2024-07-16 00:28:26.036191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.490 [2024-07-16 00:28:26.036433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.490 [2024-07-16 00:28:26.036457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.490 [2024-07-16 00:28:26.036473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.490 [2024-07-16 00:28:26.040047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.490 [2024-07-16 00:28:26.049309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.490 [2024-07-16 00:28:26.049785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.490 [2024-07-16 00:28:26.049812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.490 [2024-07-16 00:28:26.049828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.491 [2024-07-16 00:28:26.050080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.491 [2024-07-16 00:28:26.050300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.491 [2024-07-16 00:28:26.050320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.491 [2024-07-16 00:28:26.050332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.491 [2024-07-16 00:28:26.053825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.491 [2024-07-16 00:28:26.063300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.491 [2024-07-16 00:28:26.063819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.491 [2024-07-16 00:28:26.063870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.491 [2024-07-16 00:28:26.063897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.491 [2024-07-16 00:28:26.064137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.491 [2024-07-16 00:28:26.064380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.491 [2024-07-16 00:28:26.064404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.491 [2024-07-16 00:28:26.064419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.491 [2024-07-16 00:28:26.068094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.491 [2024-07-16 00:28:26.077161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.491 [2024-07-16 00:28:26.077595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.491 [2024-07-16 00:28:26.077627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.491 [2024-07-16 00:28:26.077645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.491 [2024-07-16 00:28:26.077896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.491 [2024-07-16 00:28:26.078139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.491 [2024-07-16 00:28:26.078164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.491 [2024-07-16 00:28:26.078180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.491 [2024-07-16 00:28:26.081745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.491 [2024-07-16 00:28:26.091010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.491 [2024-07-16 00:28:26.091464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.491 [2024-07-16 00:28:26.091495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.491 [2024-07-16 00:28:26.091512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.491 [2024-07-16 00:28:26.091749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.491 [2024-07-16 00:28:26.092004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.491 [2024-07-16 00:28:26.092029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.491 [2024-07-16 00:28:26.092045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.491 [2024-07-16 00:28:26.095615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.104914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.105539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.105570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.105588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.105826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.106082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.106106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.106122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.109691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.118752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.119216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.119247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.119265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.119502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.119745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.119769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.119785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.123358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.132619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.133038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.133065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.133081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.133310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.133503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.133522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.133534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.137051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.146457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.146925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.146957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.146981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.147220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.147464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.147488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.147503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.151083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.160340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.160953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.160985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.161003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.161241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.161484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.161508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.161523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.165098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.174362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.174840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.174871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.174901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.175141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.175384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.175408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.175423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.178996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.188260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.188715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.188746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.188764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.189013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.189257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.189287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.189302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.192870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.202150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.202603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.202634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.202652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.202900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.203143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.203167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.203182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.206748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.216014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.216453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.216481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.216496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.216728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.216964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.216986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.216999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.220545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.230025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.230489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.230520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.230538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.230776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.231031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.231056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.231071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.234633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.243898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.244329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.244360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.244378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.244616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.244859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.244893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.244910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.248477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.257742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.258183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.258214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.258232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.258470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.258712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.258737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.258752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.751 [2024-07-16 00:28:26.262324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.751 [2024-07-16 00:28:26.271583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.751 [2024-07-16 00:28:26.272045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.751 [2024-07-16 00:28:26.272077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.751 [2024-07-16 00:28:26.272094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.751 [2024-07-16 00:28:26.272333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.751 [2024-07-16 00:28:26.272575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.751 [2024-07-16 00:28:26.272599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.751 [2024-07-16 00:28:26.272615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.752 [2024-07-16 00:28:26.276193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.752 [2024-07-16 00:28:26.285477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.752 [2024-07-16 00:28:26.285955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.752 [2024-07-16 00:28:26.285988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.752 [2024-07-16 00:28:26.286011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.752 [2024-07-16 00:28:26.286250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.752 [2024-07-16 00:28:26.286493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.752 [2024-07-16 00:28:26.286517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.752 [2024-07-16 00:28:26.286533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.752 [2024-07-16 00:28:26.290117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.752 [2024-07-16 00:28:26.299388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.752 [2024-07-16 00:28:26.299920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.752 [2024-07-16 00:28:26.299952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.752 [2024-07-16 00:28:26.299970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.752 [2024-07-16 00:28:26.300208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.752 [2024-07-16 00:28:26.300450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.752 [2024-07-16 00:28:26.300474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.752 [2024-07-16 00:28:26.300490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.752 [2024-07-16 00:28:26.304069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.752 [2024-07-16 00:28:26.313335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.752 [2024-07-16 00:28:26.313811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.752 [2024-07-16 00:28:26.313838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.752 [2024-07-16 00:28:26.313853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.752 [2024-07-16 00:28:26.314091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.752 [2024-07-16 00:28:26.314328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.752 [2024-07-16 00:28:26.314348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.752 [2024-07-16 00:28:26.314361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.752 [2024-07-16 00:28:26.317862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.752 [2024-07-16 00:28:26.327350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.752 [2024-07-16 00:28:26.327805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.752 [2024-07-16 00:28:26.327852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.752 [2024-07-16 00:28:26.327870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.752 [2024-07-16 00:28:26.328120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.752 [2024-07-16 00:28:26.328363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.752 [2024-07-16 00:28:26.328387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.752 [2024-07-16 00:28:26.328408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.752 [2024-07-16 00:28:26.331988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.752 [2024-07-16 00:28:26.341056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.752 [2024-07-16 00:28:26.341484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.752 [2024-07-16 00:28:26.341514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.752 [2024-07-16 00:28:26.341531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.752 [2024-07-16 00:28:26.341752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.752 [2024-07-16 00:28:26.341991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.752 [2024-07-16 00:28:26.342015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.752 [2024-07-16 00:28:26.342029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.752 [2024-07-16 00:28:26.345286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:34.752 [2024-07-16 00:28:26.354394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.752 [2024-07-16 00:28:26.354752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.752 [2024-07-16 00:28:26.354792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:34.752 [2024-07-16 00:28:26.354807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:34.752 [2024-07-16 00:28:26.355071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:34.752 [2024-07-16 00:28:26.355295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:34.752 [2024-07-16 00:28:26.355315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:34.752 [2024-07-16 00:28:26.355327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.752 [2024-07-16 00:28:26.358277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.012 [2024-07-16 00:28:26.367917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.012 [2024-07-16 00:28:26.368348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.012 [2024-07-16 00:28:26.368377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.012 [2024-07-16 00:28:26.368393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.012 [2024-07-16 00:28:26.368635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.012 [2024-07-16 00:28:26.368853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.012 [2024-07-16 00:28:26.368899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.012 [2024-07-16 00:28:26.368914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.012 [2024-07-16 00:28:26.372338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.012 [2024-07-16 00:28:26.381234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.012 [2024-07-16 00:28:26.381712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.012 [2024-07-16 00:28:26.381754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.012 [2024-07-16 00:28:26.381770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.012 [2024-07-16 00:28:26.382040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.012 [2024-07-16 00:28:26.382265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.382285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.382298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.385337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.394547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.394985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.395014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.395030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.395273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.395490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.395510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.395522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.398496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.407785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.408243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.408285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.408302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.408538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.408738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.408757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.408770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.411744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.421130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.421645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.421672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.421689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.421955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.422156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.422176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.422188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.425143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.434367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.434802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.434830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.434845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.435093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.435309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.435329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.435342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.438340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.447690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.448091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.448119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.448135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.448385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.448584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.448604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.448617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.451700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.461079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.461535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.461563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.461580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.461834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.462102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.462125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.462144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.465186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.474410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.474797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.474824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.474840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.475102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.475319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.475339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.475352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.478379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.487692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.488120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.488149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.488165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.488419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.488619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.488639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.488652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.491627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.500967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.013 [2024-07-16 00:28:26.501399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.013 [2024-07-16 00:28:26.501427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.013 [2024-07-16 00:28:26.501443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.013 [2024-07-16 00:28:26.501682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.013 [2024-07-16 00:28:26.501896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.013 [2024-07-16 00:28:26.501917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.013 [2024-07-16 00:28:26.501930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.013 [2024-07-16 00:28:26.504940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.013 [2024-07-16 00:28:26.514225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.013 [2024-07-16 00:28:26.514699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.013 [2024-07-16 00:28:26.514730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.013 [2024-07-16 00:28:26.514762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.013 [2024-07-16 00:28:26.514996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.013 [2024-07-16 00:28:26.515222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.013 [2024-07-16 00:28:26.515256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.013 [2024-07-16 00:28:26.515269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.013 [2024-07-16 00:28:26.518250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.013 [2024-07-16 00:28:26.527492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.013 [2024-07-16 00:28:26.527912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.013 [2024-07-16 00:28:26.527940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.013 [2024-07-16 00:28:26.527971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.013 [2024-07-16 00:28:26.528222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.013 [2024-07-16 00:28:26.528422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.013 [2024-07-16 00:28:26.528441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.013 [2024-07-16 00:28:26.528454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.013 [2024-07-16 00:28:26.531508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.013 [2024-07-16 00:28:26.540802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.013 [2024-07-16 00:28:26.541247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.013 [2024-07-16 00:28:26.541276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.013 [2024-07-16 00:28:26.541293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.013 [2024-07-16 00:28:26.541547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.013 [2024-07-16 00:28:26.541747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.013 [2024-07-16 00:28:26.541767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.013 [2024-07-16 00:28:26.541779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.013 [2024-07-16 00:28:26.544781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.013 [2024-07-16 00:28:26.554137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.013 [2024-07-16 00:28:26.554602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.013 [2024-07-16 00:28:26.554629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.013 [2024-07-16 00:28:26.554659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.014 [2024-07-16 00:28:26.554901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.014 [2024-07-16 00:28:26.555115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.014 [2024-07-16 00:28:26.555136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.014 [2024-07-16 00:28:26.555149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.014 [2024-07-16 00:28:26.558125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.014 [2024-07-16 00:28:26.567493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.014 [2024-07-16 00:28:26.567990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.014 [2024-07-16 00:28:26.568020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.014 [2024-07-16 00:28:26.568036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.014 [2024-07-16 00:28:26.568280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.014 [2024-07-16 00:28:26.568496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.014 [2024-07-16 00:28:26.568516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.014 [2024-07-16 00:28:26.568528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.014 [2024-07-16 00:28:26.571552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.014 [2024-07-16 00:28:26.580812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.014 [2024-07-16 00:28:26.581243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.014 [2024-07-16 00:28:26.581271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.014 [2024-07-16 00:28:26.581287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.014 [2024-07-16 00:28:26.581519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.014 [2024-07-16 00:28:26.581719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.014 [2024-07-16 00:28:26.581738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.014 [2024-07-16 00:28:26.581751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.014 [2024-07-16 00:28:26.584770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.014 [2024-07-16 00:28:26.594035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.014 [2024-07-16 00:28:26.594567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.014 [2024-07-16 00:28:26.594596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.014 [2024-07-16 00:28:26.594612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.014 [2024-07-16 00:28:26.594866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.014 [2024-07-16 00:28:26.595102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.014 [2024-07-16 00:28:26.595123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.014 [2024-07-16 00:28:26.595137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.014 [2024-07-16 00:28:26.598120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.014 [2024-07-16 00:28:26.607372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.014 [2024-07-16 00:28:26.607870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.014 [2024-07-16 00:28:26.607905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.014 [2024-07-16 00:28:26.607921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.014 [2024-07-16 00:28:26.608166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.014 [2024-07-16 00:28:26.608381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.014 [2024-07-16 00:28:26.608401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.014 [2024-07-16 00:28:26.608414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.014 [2024-07-16 00:28:26.611415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.014 [2024-07-16 00:28:26.620768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.014 [2024-07-16 00:28:26.621187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.014 [2024-07-16 00:28:26.621216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.014 [2024-07-16 00:28:26.621231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.014 [2024-07-16 00:28:26.621445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.014 [2024-07-16 00:28:26.621664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.014 [2024-07-16 00:28:26.621685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.014 [2024-07-16 00:28:26.621700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.273 [2024-07-16 00:28:26.625025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.273 [2024-07-16 00:28:26.634154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.273 [2024-07-16 00:28:26.634670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.273 [2024-07-16 00:28:26.634698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.273 [2024-07-16 00:28:26.634714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.273 [2024-07-16 00:28:26.634968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.273 [2024-07-16 00:28:26.635205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.273 [2024-07-16 00:28:26.635226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.273 [2024-07-16 00:28:26.635255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.273 [2024-07-16 00:28:26.638378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.273 [2024-07-16 00:28:26.647379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.273 [2024-07-16 00:28:26.647851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.273 [2024-07-16 00:28:26.647885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.273 [2024-07-16 00:28:26.647908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.273 [2024-07-16 00:28:26.648148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.273 [2024-07-16 00:28:26.648363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.273 [2024-07-16 00:28:26.648382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.273 [2024-07-16 00:28:26.648395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.273 [2024-07-16 00:28:26.651414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.273 [2024-07-16 00:28:26.660661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.661099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.661126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.661157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.661412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.661612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.661632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.661644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.664626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.274 [2024-07-16 00:28:26.673950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.674398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.674439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.674455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.674707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.674932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.674954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.674967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.677882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.274 [2024-07-16 00:28:26.687152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.687571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.687598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.687628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.687890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.688115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.688141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.688155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.691173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.274 [2024-07-16 00:28:26.700370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.700868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.700902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.700919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.701162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.701378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.701399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.701411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.704395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.274 [2024-07-16 00:28:26.713659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.714119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.714147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.714163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.714417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.714616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.714636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.714648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.717658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.274 [2024-07-16 00:28:26.726831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.727310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.727338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.727354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.727608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.727807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.727827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.727840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.730835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.274 [2024-07-16 00:28:26.740144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.740596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.740624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.740640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.740902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.741128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.741150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.741163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.744171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.274 [2024-07-16 00:28:26.753354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.753776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.753803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.753836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.754099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.754316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.754337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.754350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.757356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.274 [2024-07-16 00:28:26.766677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.767104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.767132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.767148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.767391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.767605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.767625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.767638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.770647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.274 [2024-07-16 00:28:26.780030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.780493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.780519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.780550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.780788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.781034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.781056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.781069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.784064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.274 [2024-07-16 00:28:26.793355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.274 [2024-07-16 00:28:26.793810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.274 [2024-07-16 00:28:26.793853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.274 [2024-07-16 00:28:26.793869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.274 [2024-07-16 00:28:26.794108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.274 [2024-07-16 00:28:26.794345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.274 [2024-07-16 00:28:26.794365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.274 [2024-07-16 00:28:26.794378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.274 [2024-07-16 00:28:26.797355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.274 [2024-07-16 00:28:26.806705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.275 [2024-07-16 00:28:26.807200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.275 [2024-07-16 00:28:26.807228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.275 [2024-07-16 00:28:26.807244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.275 [2024-07-16 00:28:26.807483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.275 [2024-07-16 00:28:26.807682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.275 [2024-07-16 00:28:26.807702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.275 [2024-07-16 00:28:26.807714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.275 [2024-07-16 00:28:26.810735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.275 [2024-07-16 00:28:26.819995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.275 [2024-07-16 00:28:26.820443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.275 [2024-07-16 00:28:26.820485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.275 [2024-07-16 00:28:26.820501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.275 [2024-07-16 00:28:26.820751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.275 [2024-07-16 00:28:26.820976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.275 [2024-07-16 00:28:26.820997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.275 [2024-07-16 00:28:26.821016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.275 [2024-07-16 00:28:26.823997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.275 [2024-07-16 00:28:26.833207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.275 [2024-07-16 00:28:26.833658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.275 [2024-07-16 00:28:26.833699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.275 [2024-07-16 00:28:26.833716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.275 [2024-07-16 00:28:26.833983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.275 [2024-07-16 00:28:26.834216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.275 [2024-07-16 00:28:26.834261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.275 [2024-07-16 00:28:26.834274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.275 [2024-07-16 00:28:26.837248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.275 [2024-07-16 00:28:26.846351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.275 [2024-07-16 00:28:26.846755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.275 [2024-07-16 00:28:26.846781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.275 [2024-07-16 00:28:26.846796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.275 [2024-07-16 00:28:26.847049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.275 [2024-07-16 00:28:26.847292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.275 [2024-07-16 00:28:26.847312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.275 [2024-07-16 00:28:26.847325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.275 [2024-07-16 00:28:26.850305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.275 [2024-07-16 00:28:26.859584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.275 [2024-07-16 00:28:26.860022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.275 [2024-07-16 00:28:26.860051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.275 [2024-07-16 00:28:26.860067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.275 [2024-07-16 00:28:26.860320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.275 [2024-07-16 00:28:26.860519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.275 [2024-07-16 00:28:26.860539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.275 [2024-07-16 00:28:26.860551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.275 [2024-07-16 00:28:26.863536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.275 [2024-07-16 00:28:26.872887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.275 [2024-07-16 00:28:26.873343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.275 [2024-07-16 00:28:26.873371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.275 [2024-07-16 00:28:26.873386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.275 [2024-07-16 00:28:26.873614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.275 [2024-07-16 00:28:26.873843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.275 [2024-07-16 00:28:26.873865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.275 [2024-07-16 00:28:26.873888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.275 [2024-07-16 00:28:26.877256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.536 [2024-07-16 00:28:26.886546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.536 [2024-07-16 00:28:26.886966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.536 [2024-07-16 00:28:26.886995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.536 [2024-07-16 00:28:26.887012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.536 [2024-07-16 00:28:26.887240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.536 [2024-07-16 00:28:26.887460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.536 [2024-07-16 00:28:26.887480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.536 [2024-07-16 00:28:26.887492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.536 [2024-07-16 00:28:26.890713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.536 [2024-07-16 00:28:26.899776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.536 [2024-07-16 00:28:26.900266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.536 [2024-07-16 00:28:26.900294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.536 [2024-07-16 00:28:26.900325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.536 [2024-07-16 00:28:26.900576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.536 [2024-07-16 00:28:26.900776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.536 [2024-07-16 00:28:26.900795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.536 [2024-07-16 00:28:26.900808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.536 [2024-07-16 00:28:26.903808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.536 [2024-07-16 00:28:26.913158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.536 [2024-07-16 00:28:26.913593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.536 [2024-07-16 00:28:26.913636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.536 [2024-07-16 00:28:26.913652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.536 [2024-07-16 00:28:26.913908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.536 [2024-07-16 00:28:26.914114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.536 [2024-07-16 00:28:26.914136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.536 [2024-07-16 00:28:26.914150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.536 [2024-07-16 00:28:26.917136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.536 [2024-07-16 00:28:26.926347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.536 [2024-07-16 00:28:26.926799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.536 [2024-07-16 00:28:26.926841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.536 [2024-07-16 00:28:26.926858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.536 [2024-07-16 00:28:26.927094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.536 [2024-07-16 00:28:26.927314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.536 [2024-07-16 00:28:26.927335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.536 [2024-07-16 00:28:26.927349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.536 [2024-07-16 00:28:26.930336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.536 [2024-07-16 00:28:26.939590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.536 [2024-07-16 00:28:26.940052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.536 [2024-07-16 00:28:26.940081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.536 [2024-07-16 00:28:26.940098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.536 [2024-07-16 00:28:26.940341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.536 [2024-07-16 00:28:26.940588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.536 [2024-07-16 00:28:26.940609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.536 [2024-07-16 00:28:26.940623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.536 [2024-07-16 00:28:26.943641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.536 [2024-07-16 00:28:26.952956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.536 [2024-07-16 00:28:26.953445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.536 [2024-07-16 00:28:26.953496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.536 [2024-07-16 00:28:26.953512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.536 [2024-07-16 00:28:26.953743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.536 [2024-07-16 00:28:26.953970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.536 [2024-07-16 00:28:26.953992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.536 [2024-07-16 00:28:26.954009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.536 [2024-07-16 00:28:26.957028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.536 [2024-07-16 00:28:26.966230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.536 [2024-07-16 00:28:26.966635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.536 [2024-07-16 00:28:26.966663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.536 [2024-07-16 00:28:26.966680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.536 [2024-07-16 00:28:26.966942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.536 [2024-07-16 00:28:26.967148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.536 [2024-07-16 00:28:26.967168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.536 [2024-07-16 00:28:26.967181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.536 [2024-07-16 00:28:26.970202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.536 [2024-07-16 00:28:26.979559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.536 [2024-07-16 00:28:26.979942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.536 [2024-07-16 00:28:26.979971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:35.536 [2024-07-16 00:28:26.979987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:35.536 [2024-07-16 00:28:26.980226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:35.536 [2024-07-16 00:28:26.980426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.536 [2024-07-16 00:28:26.980447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.536 [2024-07-16 00:28:26.980459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.536 [2024-07-16 00:28:26.983474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.536 [2024-07-16 00:28:26.992916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.536 [2024-07-16 00:28:26.993478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.536 [2024-07-16 00:28:26.993518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.536 [2024-07-16 00:28:26.993535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.536 [2024-07-16 00:28:26.993786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.536 [2024-07-16 00:28:26.993992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.536 [2024-07-16 00:28:26.994013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.536 [2024-07-16 00:28:26.994026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.536 [2024-07-16 00:28:26.996994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.536 [2024-07-16 00:28:27.006169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.536 [2024-07-16 00:28:27.006664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.536 [2024-07-16 00:28:27.006695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.536 [2024-07-16 00:28:27.006728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.536 [2024-07-16 00:28:27.006976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.536 [2024-07-16 00:28:27.007183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.536 [2024-07-16 00:28:27.007217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.536 [2024-07-16 00:28:27.007231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.536 [2024-07-16 00:28:27.010249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.536 [2024-07-16 00:28:27.019550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.536 [2024-07-16 00:28:27.019987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.536 [2024-07-16 00:28:27.020016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.536 [2024-07-16 00:28:27.020033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.536 [2024-07-16 00:28:27.020286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.536 [2024-07-16 00:28:27.020486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.536 [2024-07-16 00:28:27.020505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.536 [2024-07-16 00:28:27.020518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.536 [2024-07-16 00:28:27.023575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.536 [2024-07-16 00:28:27.032819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.536 [2024-07-16 00:28:27.033321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.536 [2024-07-16 00:28:27.033349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.536 [2024-07-16 00:28:27.033365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.536 [2024-07-16 00:28:27.033609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.536 [2024-07-16 00:28:27.033823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.536 [2024-07-16 00:28:27.033843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.536 [2024-07-16 00:28:27.033870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.536 [2024-07-16 00:28:27.036861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.536 [2024-07-16 00:28:27.046021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.536 [2024-07-16 00:28:27.046483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.536 [2024-07-16 00:28:27.046524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.536 [2024-07-16 00:28:27.046540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.536 [2024-07-16 00:28:27.046771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.536 [2024-07-16 00:28:27.047008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.536 [2024-07-16 00:28:27.047031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.536 [2024-07-16 00:28:27.047044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.536 [2024-07-16 00:28:27.050046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.536 [2024-07-16 00:28:27.059345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.536 [2024-07-16 00:28:27.059792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.536 [2024-07-16 00:28:27.059833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.536 [2024-07-16 00:28:27.059850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.536 [2024-07-16 00:28:27.060095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.536 [2024-07-16 00:28:27.060311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.536 [2024-07-16 00:28:27.060331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.536 [2024-07-16 00:28:27.060344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.536 [2024-07-16 00:28:27.063364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.536 [2024-07-16 00:28:27.072676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.536 [2024-07-16 00:28:27.073111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.536 [2024-07-16 00:28:27.073153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.536 [2024-07-16 00:28:27.073170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.536 [2024-07-16 00:28:27.073419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.536 [2024-07-16 00:28:27.073618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.536 [2024-07-16 00:28:27.073638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.536 [2024-07-16 00:28:27.073651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.536 [2024-07-16 00:28:27.076668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.536 [2024-07-16 00:28:27.085986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.536 [2024-07-16 00:28:27.086455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.536 [2024-07-16 00:28:27.086482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.536 [2024-07-16 00:28:27.086514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.536 [2024-07-16 00:28:27.086750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.536 [2024-07-16 00:28:27.087000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.537 [2024-07-16 00:28:27.087023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.537 [2024-07-16 00:28:27.087037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.537 [2024-07-16 00:28:27.090147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.537 [2024-07-16 00:28:27.099284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.537 [2024-07-16 00:28:27.099732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.537 [2024-07-16 00:28:27.099774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.537 [2024-07-16 00:28:27.099791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.537 [2024-07-16 00:28:27.100047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.537 [2024-07-16 00:28:27.100298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.537 [2024-07-16 00:28:27.100319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.537 [2024-07-16 00:28:27.100332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.537 [2024-07-16 00:28:27.103402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.537 [2024-07-16 00:28:27.112480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.537 [2024-07-16 00:28:27.112900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.537 [2024-07-16 00:28:27.112929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.537 [2024-07-16 00:28:27.112945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.537 [2024-07-16 00:28:27.113201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.537 [2024-07-16 00:28:27.113401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.537 [2024-07-16 00:28:27.113421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.537 [2024-07-16 00:28:27.113433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.537 [2024-07-16 00:28:27.116496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.537 [2024-07-16 00:28:27.125816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.537 [2024-07-16 00:28:27.126302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.537 [2024-07-16 00:28:27.126330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.537 [2024-07-16 00:28:27.126346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.537 [2024-07-16 00:28:27.126590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.537 [2024-07-16 00:28:27.126809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.537 [2024-07-16 00:28:27.126831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.537 [2024-07-16 00:28:27.126845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.537 [2024-07-16 00:28:27.130220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.537 [2024-07-16 00:28:27.139147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.537 [2024-07-16 00:28:27.139664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.537 [2024-07-16 00:28:27.139692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.537 [2024-07-16 00:28:27.139713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.537 [2024-07-16 00:28:27.139968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.537 [2024-07-16 00:28:27.140197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.537 [2024-07-16 00:28:27.140218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.537 [2024-07-16 00:28:27.140231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.537 [2024-07-16 00:28:27.143356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.797 [2024-07-16 00:28:27.152761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.797 [2024-07-16 00:28:27.153195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.797 [2024-07-16 00:28:27.153237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.797 [2024-07-16 00:28:27.153253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.797 [2024-07-16 00:28:27.153524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.797 [2024-07-16 00:28:27.153765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.797 [2024-07-16 00:28:27.153785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.797 [2024-07-16 00:28:27.153798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.797 [2024-07-16 00:28:27.156903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.797 [2024-07-16 00:28:27.166086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.797 [2024-07-16 00:28:27.166578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.797 [2024-07-16 00:28:27.166605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.797 [2024-07-16 00:28:27.166637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.797 [2024-07-16 00:28:27.166899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.797 [2024-07-16 00:28:27.167125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.797 [2024-07-16 00:28:27.167146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.797 [2024-07-16 00:28:27.167160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.797 [2024-07-16 00:28:27.170137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.179282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.179704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.179733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.179764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.180021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.180267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.180291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.180305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.183274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.192525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.193066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.193094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.193110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.193347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.193546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.193567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.193579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.196573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.206291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.206914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.206967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.206985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.207223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.207466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.207490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.207506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.211076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.220152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.220594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.220638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.220656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.220904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.221135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.221171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.221184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.224694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.234176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.234652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.234679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.234709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.234959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.235203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.235227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.235242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.238808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.248073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.248504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.248535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.248553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.248791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.249047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.249072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.249087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.252655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.261917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.262380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.262411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.262429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.262667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.262922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.262948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.262963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.266526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.275783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.276251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.276282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.276300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.276543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.276786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.276810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.276826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.280401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.289658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.290119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.290151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.290168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.290406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.290649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.290673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.290689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.294266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.303527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.303984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.304015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.304033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.304271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.304514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.798 [2024-07-16 00:28:27.304538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.798 [2024-07-16 00:28:27.304554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.798 [2024-07-16 00:28:27.308131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.798 [2024-07-16 00:28:27.317389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.798 [2024-07-16 00:28:27.317844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.798 [2024-07-16 00:28:27.317884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.798 [2024-07-16 00:28:27.317905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.798 [2024-07-16 00:28:27.318144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.798 [2024-07-16 00:28:27.318387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.799 [2024-07-16 00:28:27.318411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.799 [2024-07-16 00:28:27.318432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.799 [2024-07-16 00:28:27.322005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.799 [2024-07-16 00:28:27.331266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.799 [2024-07-16 00:28:27.331696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.799 [2024-07-16 00:28:27.331728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.799 [2024-07-16 00:28:27.331746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.799 [2024-07-16 00:28:27.331996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.799 [2024-07-16 00:28:27.332240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.799 [2024-07-16 00:28:27.332264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.799 [2024-07-16 00:28:27.332280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.799 [2024-07-16 00:28:27.335849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.799 [2024-07-16 00:28:27.345115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.799 [2024-07-16 00:28:27.345649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.799 [2024-07-16 00:28:27.345698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.799 [2024-07-16 00:28:27.345716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.799 [2024-07-16 00:28:27.345965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.799 [2024-07-16 00:28:27.346209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.799 [2024-07-16 00:28:27.346233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.799 [2024-07-16 00:28:27.346249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.799 [2024-07-16 00:28:27.349813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.799 [2024-07-16 00:28:27.359083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.799 [2024-07-16 00:28:27.359532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.799 [2024-07-16 00:28:27.359563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.799 [2024-07-16 00:28:27.359581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.799 [2024-07-16 00:28:27.359818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.799 [2024-07-16 00:28:27.360074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.799 [2024-07-16 00:28:27.360099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.799 [2024-07-16 00:28:27.360114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.799 [2024-07-16 00:28:27.363680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.799 [2024-07-16 00:28:27.372940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.799 [2024-07-16 00:28:27.373380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.799 [2024-07-16 00:28:27.373411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.799 [2024-07-16 00:28:27.373429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.799 [2024-07-16 00:28:27.373667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.799 [2024-07-16 00:28:27.373923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.799 [2024-07-16 00:28:27.373948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.799 [2024-07-16 00:28:27.373964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.799 [2024-07-16 00:28:27.377527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.799 [2024-07-16 00:28:27.386791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.799 [2024-07-16 00:28:27.387235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.799 [2024-07-16 00:28:27.387266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.799 [2024-07-16 00:28:27.387284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.799 [2024-07-16 00:28:27.387521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.799 [2024-07-16 00:28:27.387764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.799 [2024-07-16 00:28:27.387789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.799 [2024-07-16 00:28:27.387805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.799 [2024-07-16 00:28:27.391377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.799 [2024-07-16 00:28:27.400650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:35.799 [2024-07-16 00:28:27.401125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.799 [2024-07-16 00:28:27.401157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:35.799 [2024-07-16 00:28:27.401175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:35.799 [2024-07-16 00:28:27.401413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:35.799 [2024-07-16 00:28:27.401656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.799 [2024-07-16 00:28:27.401681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.799 [2024-07-16 00:28:27.401696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.799 [2024-07-16 00:28:27.405274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.060 [2024-07-16 00:28:27.414552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.060 [2024-07-16 00:28:27.415013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.060 [2024-07-16 00:28:27.415045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.060 [2024-07-16 00:28:27.415062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.060 [2024-07-16 00:28:27.415301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.060 [2024-07-16 00:28:27.415550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.060 [2024-07-16 00:28:27.415575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.060 [2024-07-16 00:28:27.415591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.060 [2024-07-16 00:28:27.419170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.060 [2024-07-16 00:28:27.428443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.060 [2024-07-16 00:28:27.428925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.060 [2024-07-16 00:28:27.428957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.060 [2024-07-16 00:28:27.428974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.060 [2024-07-16 00:28:27.429212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.060 [2024-07-16 00:28:27.429455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.060 [2024-07-16 00:28:27.429480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.060 [2024-07-16 00:28:27.429495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.060 [2024-07-16 00:28:27.433071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.060 [2024-07-16 00:28:27.442334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.060 [2024-07-16 00:28:27.442946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.060 [2024-07-16 00:28:27.442978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.060 [2024-07-16 00:28:27.442996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.060 [2024-07-16 00:28:27.443234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.060 [2024-07-16 00:28:27.443477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.060 [2024-07-16 00:28:27.443502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.060 [2024-07-16 00:28:27.443517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.060 [2024-07-16 00:28:27.447099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.060 [2024-07-16 00:28:27.456365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.060 [2024-07-16 00:28:27.456872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.060 [2024-07-16 00:28:27.456905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.060 [2024-07-16 00:28:27.456921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.060 [2024-07-16 00:28:27.457155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.060 [2024-07-16 00:28:27.457364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.060 [2024-07-16 00:28:27.457384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.060 [2024-07-16 00:28:27.457396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.460904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.470376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.470965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.470997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.471014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.471252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.471495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.471519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.471535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.475111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.484364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.484818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.484849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.484867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.485116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.485359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.485384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.485399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.488973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.498238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.498697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.498728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.498746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.498997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.499242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.499266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.499281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.502846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.512111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.512541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.512578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.512596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.512835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.513089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.513115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.513130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.516700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.525972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.526414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.526445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.526462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.526700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.526958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.526983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.526999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.530563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.539837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.540288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.540317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.540333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.540580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.540774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.540793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.540805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.544331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.553816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.554351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.554403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.554421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.554659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.554919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.554945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.554960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.558528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.567792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.568253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.568285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.568303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.568541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.568785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.568809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.568825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.572393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.581644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.582059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.582090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.582107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.582345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.582588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.582612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.582628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.586194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.595670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.596196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.596224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.596239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.596481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.061 [2024-07-16 00:28:27.596675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.061 [2024-07-16 00:28:27.596694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.061 [2024-07-16 00:28:27.596706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.061 [2024-07-16 00:28:27.600281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.061 [2024-07-16 00:28:27.609563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.061 [2024-07-16 00:28:27.610004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.061 [2024-07-16 00:28:27.610035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.061 [2024-07-16 00:28:27.610053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.061 [2024-07-16 00:28:27.610292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.062 [2024-07-16 00:28:27.610535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.062 [2024-07-16 00:28:27.610559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.062 [2024-07-16 00:28:27.610575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.062 [2024-07-16 00:28:27.614158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.062 [2024-07-16 00:28:27.623427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.062 [2024-07-16 00:28:27.623894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.062 [2024-07-16 00:28:27.623926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.062 [2024-07-16 00:28:27.623944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.062 [2024-07-16 00:28:27.624181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.062 [2024-07-16 00:28:27.624425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.062 [2024-07-16 00:28:27.624449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.062 [2024-07-16 00:28:27.624464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.062 [2024-07-16 00:28:27.628038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.062 [2024-07-16 00:28:27.637299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.062 [2024-07-16 00:28:27.637759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.062 [2024-07-16 00:28:27.637791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.062 [2024-07-16 00:28:27.637809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.062 [2024-07-16 00:28:27.638057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.062 [2024-07-16 00:28:27.638301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.062 [2024-07-16 00:28:27.638326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.062 [2024-07-16 00:28:27.638341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.062 [2024-07-16 00:28:27.641908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.062 [2024-07-16 00:28:27.651172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.062 [2024-07-16 00:28:27.651626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.062 [2024-07-16 00:28:27.651657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.062 [2024-07-16 00:28:27.651680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.062 [2024-07-16 00:28:27.651928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.062 [2024-07-16 00:28:27.652171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.062 [2024-07-16 00:28:27.652196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.062 [2024-07-16 00:28:27.652211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.062 [2024-07-16 00:28:27.655771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.062 [2024-07-16 00:28:27.665050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.062 [2024-07-16 00:28:27.665487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.062 [2024-07-16 00:28:27.665518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.062 [2024-07-16 00:28:27.665536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.062 [2024-07-16 00:28:27.665774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.062 [2024-07-16 00:28:27.666031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.062 [2024-07-16 00:28:27.666056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.062 [2024-07-16 00:28:27.666072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.062 [2024-07-16 00:28:27.669657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.323 [2024-07-16 00:28:27.678963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.323 [2024-07-16 00:28:27.679401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.323 [2024-07-16 00:28:27.679433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.323 [2024-07-16 00:28:27.679451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.323 [2024-07-16 00:28:27.679688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.323 [2024-07-16 00:28:27.679944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.323 [2024-07-16 00:28:27.679969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.323 [2024-07-16 00:28:27.679985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.323 [2024-07-16 00:28:27.683551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.323 [2024-07-16 00:28:27.692835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.323 [2024-07-16 00:28:27.693302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.323 [2024-07-16 00:28:27.693334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.323 [2024-07-16 00:28:27.693352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.323 [2024-07-16 00:28:27.693589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.323 [2024-07-16 00:28:27.693833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.323 [2024-07-16 00:28:27.693864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.323 [2024-07-16 00:28:27.693889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.323 [2024-07-16 00:28:27.697460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.323 [2024-07-16 00:28:27.706730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.323 [2024-07-16 00:28:27.707168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.323 [2024-07-16 00:28:27.707199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.323 [2024-07-16 00:28:27.707217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.323 [2024-07-16 00:28:27.707455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.323 [2024-07-16 00:28:27.707698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.323 [2024-07-16 00:28:27.707723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.323 [2024-07-16 00:28:27.707738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.323 [2024-07-16 00:28:27.711315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.323 [2024-07-16 00:28:27.720582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.323 [2024-07-16 00:28:27.721046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.323 [2024-07-16 00:28:27.721077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.323 [2024-07-16 00:28:27.721095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.323 [2024-07-16 00:28:27.721333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.323 [2024-07-16 00:28:27.721575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.323 [2024-07-16 00:28:27.721600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.323 [2024-07-16 00:28:27.721615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.323 [2024-07-16 00:28:27.725196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.323 [2024-07-16 00:28:27.734458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.323 [2024-07-16 00:28:27.734904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.323 [2024-07-16 00:28:27.734936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.323 [2024-07-16 00:28:27.734954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.323 [2024-07-16 00:28:27.735193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.323 [2024-07-16 00:28:27.735435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.323 [2024-07-16 00:28:27.735459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.323 [2024-07-16 00:28:27.735475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.323 [2024-07-16 00:28:27.739052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.323 [2024-07-16 00:28:27.748312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.323 [2024-07-16 00:28:27.748778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.323 [2024-07-16 00:28:27.748809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.323 [2024-07-16 00:28:27.748827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.323 [2024-07-16 00:28:27.749075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.323 [2024-07-16 00:28:27.749319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.323 [2024-07-16 00:28:27.749344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.749359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.752936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.324 [2024-07-16 00:28:27.762200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.762653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.762684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.762703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.762953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.763198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.763223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.763238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.766806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.324 [2024-07-16 00:28:27.776085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.776669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.776720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.776738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.776988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.777231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.777255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.777271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.780837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.324 [2024-07-16 00:28:27.790116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.790570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.790601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.790620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.790863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.791118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.791143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.791158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.794726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.324 [2024-07-16 00:28:27.803999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.804466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.804497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.804514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.804752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.805014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.805039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.805054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.808625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.324 [2024-07-16 00:28:27.817903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.818333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.818365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.818383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.818621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.818864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.818907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.818928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.822497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.324 [2024-07-16 00:28:27.831768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.832241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.832272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.832289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.832527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.832771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.832795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.832816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.836393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.324 [2024-07-16 00:28:27.845668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.846102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.846134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.846152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.846389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.846633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.846657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.846672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.850254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.324 [2024-07-16 00:28:27.859509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.859964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.859995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.860013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.860250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.860494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.860518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.860533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.864105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.324 [2024-07-16 00:28:27.873367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.873823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.873854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.873872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.874123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.874366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.874391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.874406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.877976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.324 [2024-07-16 00:28:27.887231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.887850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.887923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.887942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.888179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.888423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.888447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.324 [2024-07-16 00:28:27.888463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.324 [2024-07-16 00:28:27.892037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.324 [2024-07-16 00:28:27.901094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.324 [2024-07-16 00:28:27.901712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.324 [2024-07-16 00:28:27.901773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.324 [2024-07-16 00:28:27.901790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.324 [2024-07-16 00:28:27.902040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.324 [2024-07-16 00:28:27.902283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.324 [2024-07-16 00:28:27.902307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.325 [2024-07-16 00:28:27.902323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.325 [2024-07-16 00:28:27.905895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.325 [2024-07-16 00:28:27.914950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.325 [2024-07-16 00:28:27.915408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.325 [2024-07-16 00:28:27.915439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.325 [2024-07-16 00:28:27.915457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.325 [2024-07-16 00:28:27.915694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.325 [2024-07-16 00:28:27.915950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.325 [2024-07-16 00:28:27.915975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.325 [2024-07-16 00:28:27.915991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.325 [2024-07-16 00:28:27.919554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.325 [2024-07-16 00:28:27.928816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.325 [2024-07-16 00:28:27.929265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.325 [2024-07-16 00:28:27.929296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.325 [2024-07-16 00:28:27.929314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.325 [2024-07-16 00:28:27.929552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.325 [2024-07-16 00:28:27.929801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.325 [2024-07-16 00:28:27.929825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.325 [2024-07-16 00:28:27.929840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.325 [2024-07-16 00:28:27.933423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.585 [2024-07-16 00:28:27.942703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.585 [2024-07-16 00:28:27.943158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.585 [2024-07-16 00:28:27.943190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.585 [2024-07-16 00:28:27.943208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.585 [2024-07-16 00:28:27.943445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.585 [2024-07-16 00:28:27.943689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.585 [2024-07-16 00:28:27.943713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.585 [2024-07-16 00:28:27.943728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.585 [2024-07-16 00:28:27.947306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.585 [2024-07-16 00:28:27.956574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.585 [2024-07-16 00:28:27.957030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.585 [2024-07-16 00:28:27.957062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.585 [2024-07-16 00:28:27.957080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.585 [2024-07-16 00:28:27.957318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.585 [2024-07-16 00:28:27.957561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.585 [2024-07-16 00:28:27.957585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.585 [2024-07-16 00:28:27.957601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.585 [2024-07-16 00:28:27.961178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.585 [2024-07-16 00:28:27.970444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.585 [2024-07-16 00:28:27.970901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.585 [2024-07-16 00:28:27.970932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.585 [2024-07-16 00:28:27.970950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.585 [2024-07-16 00:28:27.971188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.585 [2024-07-16 00:28:27.971430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.585 [2024-07-16 00:28:27.971454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.585 [2024-07-16 00:28:27.971470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.585 [2024-07-16 00:28:27.975047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.585 [2024-07-16 00:28:27.984303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.585 [2024-07-16 00:28:27.984933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.585 [2024-07-16 00:28:27.984964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.585 [2024-07-16 00:28:27.984982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.585 [2024-07-16 00:28:27.985220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.585 [2024-07-16 00:28:27.985463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.585 [2024-07-16 00:28:27.985487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.585 [2024-07-16 00:28:27.985503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.585 [2024-07-16 00:28:27.989071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.585 [2024-07-16 00:28:27.998326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.585 [2024-07-16 00:28:27.998779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.585 [2024-07-16 00:28:27.998810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.585 [2024-07-16 00:28:27.998827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.585 [2024-07-16 00:28:27.999077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.585 [2024-07-16 00:28:27.999321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.585 [2024-07-16 00:28:27.999345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.585 [2024-07-16 00:28:27.999360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.585 [2024-07-16 00:28:28.002931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.585 [2024-07-16 00:28:28.012188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.585 [2024-07-16 00:28:28.012642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.585 [2024-07-16 00:28:28.012673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.585 [2024-07-16 00:28:28.012691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.585 [2024-07-16 00:28:28.012940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.586 [2024-07-16 00:28:28.013184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.586 [2024-07-16 00:28:28.013208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.586 [2024-07-16 00:28:28.013223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.586 [2024-07-16 00:28:28.016787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.586 [2024-07-16 00:28:28.026056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.586 [2024-07-16 00:28:28.026504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.586 [2024-07-16 00:28:28.026535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.586 [2024-07-16 00:28:28.026560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.586 [2024-07-16 00:28:28.026799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.586 [2024-07-16 00:28:28.027055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.586 [2024-07-16 00:28:28.027081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.586 [2024-07-16 00:28:28.027096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.586 [2024-07-16 00:28:28.030659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.586 [2024-07-16 00:28:28.039920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.586 [2024-07-16 00:28:28.040372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.586 [2024-07-16 00:28:28.040403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.586 [2024-07-16 00:28:28.040421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.586 [2024-07-16 00:28:28.040659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.586 [2024-07-16 00:28:28.040913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.586 [2024-07-16 00:28:28.040948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.586 [2024-07-16 00:28:28.040964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.586 [2024-07-16 00:28:28.044530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.586 [2024-07-16 00:28:28.053905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.586 [2024-07-16 00:28:28.054365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.586 [2024-07-16 00:28:28.054396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.586 [2024-07-16 00:28:28.054414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.586 [2024-07-16 00:28:28.054652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.586 [2024-07-16 00:28:28.054908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.586 [2024-07-16 00:28:28.054933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.586 [2024-07-16 00:28:28.054948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.586 [2024-07-16 00:28:28.058513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.586 [2024-07-16 00:28:28.067785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.586 [2024-07-16 00:28:28.068255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.586 [2024-07-16 00:28:28.068287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.586 [2024-07-16 00:28:28.068305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.586 [2024-07-16 00:28:28.068542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.586 [2024-07-16 00:28:28.068785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.586 [2024-07-16 00:28:28.068815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.586 [2024-07-16 00:28:28.068832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.586 [2024-07-16 00:28:28.072413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.586 [2024-07-16 00:28:28.081690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.586 [2024-07-16 00:28:28.082134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.586 [2024-07-16 00:28:28.082166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:36.586 [2024-07-16 00:28:28.082183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:36.586 [2024-07-16 00:28:28.082421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:36.586 [2024-07-16 00:28:28.082664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.586 [2024-07-16 00:28:28.082690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.586 [2024-07-16 00:28:28.082707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.586 [2024-07-16 00:28:28.086288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... one more reset/reconnect cycle (00:28:28.095–00:28:28.100) elided; same connect() errno = 111 failure against 10.0.0.2:4420 ...]
00:24:36.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2386149 Killed "${NVMF_APP[@]}" "$@"
00:24:36.586 00:28:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2387139
00:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2387139
00:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2387139 ']'
00:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
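The "Killed" message above is the test harness itself: bdevperf.sh (line 35) SIGKILLs the old nvmf_tgt (pid 2386149) before tgt_init launches a fresh one. That is why every reconnect in this burst fails with errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 until the replacement target comes up. A minimal way to confirm that by hand, assuming the same namespace name (cvl_0_0_ns_spdk) and address shown elsewhere in this log, is sketched below; this is an illustration, not part of the test.

  # Is anything listening on the NVMe-oF TCP port inside the target's netns?
  sudo ip netns exec cvl_0_0_ns_spdk ss -ltn | grep -q 4420 \
      && echo "target listening on 4420" \
      || echo "no listener -> connect() fails with ECONNREFUSED (errno 111)"

  # Same check from the initiator side, using bash's /dev/tcp:
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null \
      && echo reachable || echo refused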
00:24:36.586 00:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... two more reset/reconnect cycles (00:28:28.109–00:28:28.114 and 00:28:28.123–00:28:28.127) elided; same connect() errno = 111 failure against 10.0.0.2:4420 ...]
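waitforlisten above is the autotest helper that blocks until the new target (pid 2387139) answers on the RPC socket, giving up after max_retries=100. A rough equivalent, assuming the SPDK checkout path shown in this log and the stock rpc_get_methods RPC, would be:

  # Poll the RPC socket until nvmf_tgt is ready to accept commands.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 100); do    # mirrors max_retries=100
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          echo "nvmf_tgt is up (attempt $i)"
          break
      fi
      sleep 0.2
  done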
[... two more reset/reconnect cycles (00:28:28.137–00:28:28.141 and 00:28:28.151–00:28:28.155) elided ...]
00:24:36.587 [2024-07-16 00:28:28.156536] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:24:36.587 [2024-07-16 00:28:28.156613] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... two more reset/reconnect cycles (00:28:28.165–00:28:28.169 and 00:28:28.178–00:28:28.183) elided ...]
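In the EAL line above, -c 0xE is the CPU core mask handed down from nvmfappstart -m 0xE: 0xE is binary 1110, selecting cores 1, 2 and 3, which matches the "Total cores available: 3" notice a few records below. A quick way to decode any such mask:

  # Decode a DPDK/SPDK core mask into the cores it selects.
  mask=0xE
  for bit in $(seq 0 31); do
      (( (mask >> bit) & 1 )) && echo "core $bit selected"
  done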
00:24:36.587 EAL: No free 2048 kB hugepages reported on node 1
[... two more reset/reconnect cycles (00:28:28.192–00:28:28.197 and 00:28:28.206–00:28:28.211) elided ...]
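The hugepage line is EAL noting that node 1 had no free 2048 kB hugepages when it scanned the system; on this rig the pages were evidently reserved on node 0 only, and the app proceeds anyway. Per-node counts can be inspected directly in sysfs, for example:

  # Reserved vs. free 2 MB hugepages on each NUMA node.
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages \
         /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages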
00:24:36.846 [2024-07-16 00:28:28.220727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.846 [2024-07-16 00:28:28.221143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.846 [2024-07-16 00:28:28.221174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.846 [2024-07-16 00:28:28.221192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.846 [2024-07-16 00:28:28.221429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.846 [2024-07-16 00:28:28.221673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.846 [2024-07-16 00:28:28.221697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.846 [2024-07-16 00:28:28.221712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.846 [2024-07-16 00:28:28.225282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.846 [2024-07-16 00:28:28.229141] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:36.846 [2024-07-16 00:28:28.234773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.846 [2024-07-16 00:28:28.235273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.846 [2024-07-16 00:28:28.235307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.846 [2024-07-16 00:28:28.235333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.846 [2024-07-16 00:28:28.235574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.846 [2024-07-16 00:28:28.235820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.846 [2024-07-16 00:28:28.235845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.846 [2024-07-16 00:28:28.235872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.846 [2024-07-16 00:28:28.239460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.846 [2024-07-16 00:28:28.248777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.846 [2024-07-16 00:28:28.249434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.846 [2024-07-16 00:28:28.249489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.846 [2024-07-16 00:28:28.249512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.846 [2024-07-16 00:28:28.249767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.846 [2024-07-16 00:28:28.250038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.846 [2024-07-16 00:28:28.250064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.846 [2024-07-16 00:28:28.250083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.846 [2024-07-16 00:28:28.253646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.846 [2024-07-16 00:28:28.262707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.846 [2024-07-16 00:28:28.263205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.846 [2024-07-16 00:28:28.263238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.846 [2024-07-16 00:28:28.263256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.846 [2024-07-16 00:28:28.263496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.263740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.263765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.263782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.267350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.276618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.277081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.277114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.277133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.277371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.277615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.277641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.277670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.281243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.290501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.290978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.291012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.291031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.291269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.291520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.291544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.291560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.295135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.304425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.305034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.305077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.305100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.305349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.305598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.305623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.305642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.309213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.318475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.318960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.318992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.319011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.319249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.319493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.319518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.319533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.323109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.332359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.332848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.332887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.332907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.333146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.333389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.333414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.333430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.337005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.346258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.346696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.346728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.346745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.346996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.347239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.347263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.347279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.350157] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:36.847 [2024-07-16 00:28:28.350195] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:36.847 [2024-07-16 00:28:28.350211] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:36.847 [2024-07-16 00:28:28.350225] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:36.847 [2024-07-16 00:28:28.350236] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:36.847 [2024-07-16 00:28:28.350297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:36.847 [2024-07-16 00:28:28.350354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:36.847 [2024-07-16 00:28:28.350358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:36.847 [2024-07-16 00:28:28.350847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.360135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.360771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.360816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.360838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.361108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.361357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.361382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.361412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.365013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.374106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.374795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.374852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.374896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.375167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.375418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.375444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.375464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.379071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.388173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.388871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.388938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.388963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.389220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.847 [2024-07-16 00:28:28.389471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.847 [2024-07-16 00:28:28.389496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.847 [2024-07-16 00:28:28.389516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.847 [2024-07-16 00:28:28.393109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.847 [2024-07-16 00:28:28.402199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.847 [2024-07-16 00:28:28.402813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.847 [2024-07-16 00:28:28.402856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.847 [2024-07-16 00:28:28.402889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.847 [2024-07-16 00:28:28.403142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.848 [2024-07-16 00:28:28.403390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.848 [2024-07-16 00:28:28.403415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.848 [2024-07-16 00:28:28.403434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.848 [2024-07-16 00:28:28.407006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.848 [2024-07-16 00:28:28.416282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.848 [2024-07-16 00:28:28.416914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.848 [2024-07-16 00:28:28.416958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.848 [2024-07-16 00:28:28.416979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.848 [2024-07-16 00:28:28.417227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.848 [2024-07-16 00:28:28.417475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.848 [2024-07-16 00:28:28.417500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.848 [2024-07-16 00:28:28.417519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.848 [2024-07-16 00:28:28.421102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.848 [2024-07-16 00:28:28.430183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.848 [2024-07-16 00:28:28.430805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.848 [2024-07-16 00:28:28.430848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.848 [2024-07-16 00:28:28.430870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.848 [2024-07-16 00:28:28.431126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.848 [2024-07-16 00:28:28.431375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.848 [2024-07-16 00:28:28.431401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.848 [2024-07-16 00:28:28.431420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.848 [2024-07-16 00:28:28.435028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.848 [2024-07-16 00:28:28.444087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.848 [2024-07-16 00:28:28.444565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.848 [2024-07-16 00:28:28.444597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.848 [2024-07-16 00:28:28.444615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:36.848 [2024-07-16 00:28:28.444854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:36.848 [2024-07-16 00:28:28.445106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.848 [2024-07-16 00:28:28.445132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.848 [2024-07-16 00:28:28.445148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.848 [2024-07-16 00:28:28.448711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.848 [2024-07-16 00:28:28.457996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.848 [2024-07-16 00:28:28.458441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.848 [2024-07-16 00:28:28.458482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:36.848 [2024-07-16 00:28:28.458500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.458753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.459007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.459032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.459048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.462613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.471902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.472359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.472390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.472408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.472647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.472900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.472925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.472940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.476502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.485790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.486263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.486296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.486313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.486552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.486795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.486820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.486834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.490405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.499678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.500137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.500169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.500186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.500424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.500667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.500691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.500713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.504288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.513551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.513990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.514022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.514039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.514278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.514521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.514545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.514561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.518130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.527392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.527826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.527857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.527884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.528125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.528368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.528393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.528408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.531978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.541250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.541687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.541719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.541737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.541985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.542229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.542254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.542269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.545833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.555111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.555542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.555579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.555598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.555836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.556089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.556114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.556130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.559689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.568962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.569413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.569444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.569462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.569700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.569953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.569978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.569994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.573557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.582813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.583287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.583318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.583336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.583574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.583817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.583841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.583856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.587427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.107 [2024-07-16 00:28:28.596689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.107 [2024-07-16 00:28:28.597109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-16 00:28:28.597140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.107 [2024-07-16 00:28:28.597158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.107 [2024-07-16 00:28:28.597396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.107 [2024-07-16 00:28:28.597646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.107 [2024-07-16 00:28:28.597670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.107 [2024-07-16 00:28:28.597685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.107 [2024-07-16 00:28:28.601258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.108 [2024-07-16 00:28:28.610521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.108 [2024-07-16 00:28:28.610975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-16 00:28:28.611008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.108 [2024-07-16 00:28:28.611026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.108 [2024-07-16 00:28:28.611264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.108 [2024-07-16 00:28:28.611507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.108 [2024-07-16 00:28:28.611531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.108 [2024-07-16 00:28:28.611547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.108 [2024-07-16 00:28:28.615115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.108 [2024-07-16 00:28:28.624383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.108 [2024-07-16 00:28:28.624813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-16 00:28:28.624844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.108 [2024-07-16 00:28:28.624871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.108 [2024-07-16 00:28:28.625119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.108 [2024-07-16 00:28:28.625362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.108 [2024-07-16 00:28:28.625387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.108 [2024-07-16 00:28:28.625402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.108 [2024-07-16 00:28:28.628973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.108 [2024-07-16 00:28:28.638236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.108 [2024-07-16 00:28:28.638705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-16 00:28:28.638736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.108 [2024-07-16 00:28:28.638753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.108 [2024-07-16 00:28:28.639002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.108 [2024-07-16 00:28:28.639246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.108 [2024-07-16 00:28:28.639270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.108 [2024-07-16 00:28:28.639285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.108 [2024-07-16 00:28:28.642852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.108 [2024-07-16 00:28:28.652123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.108 [2024-07-16 00:28:28.652575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-16 00:28:28.652606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.108 [2024-07-16 00:28:28.652624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.108 [2024-07-16 00:28:28.652863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.108 [2024-07-16 00:28:28.653116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.108 [2024-07-16 00:28:28.653140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.108 [2024-07-16 00:28:28.653155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.108 [2024-07-16 00:28:28.656717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.108 [2024-07-16 00:28:28.665993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.108 [2024-07-16 00:28:28.666445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-16 00:28:28.666477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.108 [2024-07-16 00:28:28.666495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.108 [2024-07-16 00:28:28.666733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.108 [2024-07-16 00:28:28.666988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.108 [2024-07-16 00:28:28.667014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.108 [2024-07-16 00:28:28.667029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.108 [2024-07-16 00:28:28.670590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.108 [2024-07-16 00:28:28.679856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.108 [2024-07-16 00:28:28.680298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-16 00:28:28.680331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.108 [2024-07-16 00:28:28.680349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.108 [2024-07-16 00:28:28.680587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.108 [2024-07-16 00:28:28.680830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.108 [2024-07-16 00:28:28.680855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.108 [2024-07-16 00:28:28.680870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.108 [2024-07-16 00:28:28.684440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.108 [2024-07-16 00:28:28.693702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.108 [2024-07-16 00:28:28.694176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-16 00:28:28.694207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.108 [2024-07-16 00:28:28.694231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.108 [2024-07-16 00:28:28.694471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.108 [2024-07-16 00:28:28.694714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.108 [2024-07-16 00:28:28.694739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.108 [2024-07-16 00:28:28.694754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.108 [2024-07-16 00:28:28.698321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.108 [2024-07-16 00:28:28.707581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.108 [2024-07-16 00:28:28.708018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-16 00:28:28.708051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.108 [2024-07-16 00:28:28.708069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.108 [2024-07-16 00:28:28.708308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.108 [2024-07-16 00:28:28.708551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.108 [2024-07-16 00:28:28.708575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.108 [2024-07-16 00:28:28.708590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.108 [2024-07-16 00:28:28.712158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.368 [2024-07-16 00:28:28.721430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.368 [2024-07-16 00:28:28.721892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.368 [2024-07-16 00:28:28.721923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.368 [2024-07-16 00:28:28.721941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.368 [2024-07-16 00:28:28.722180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.368 [2024-07-16 00:28:28.722423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.368 [2024-07-16 00:28:28.722447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.368 [2024-07-16 00:28:28.722462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.368 [2024-07-16 00:28:28.726036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.368 [2024-07-16 00:28:28.735312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.368 [2024-07-16 00:28:28.735713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.368 [2024-07-16 00:28:28.735745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.368 [2024-07-16 00:28:28.735763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.368 [2024-07-16 00:28:28.736013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.368 [2024-07-16 00:28:28.736257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.368 [2024-07-16 00:28:28.736287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.368 [2024-07-16 00:28:28.736304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.368 [2024-07-16 00:28:28.739867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.368 [2024-07-16 00:28:28.749359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.368 [2024-07-16 00:28:28.749827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.368 [2024-07-16 00:28:28.749858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.368 [2024-07-16 00:28:28.749889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.368 [2024-07-16 00:28:28.750131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.368 [2024-07-16 00:28:28.750375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.368 [2024-07-16 00:28:28.750399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.368 [2024-07-16 00:28:28.750415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.368 [2024-07-16 00:28:28.753987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.368 [2024-07-16 00:28:28.763262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.368 [2024-07-16 00:28:28.763693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.368 [2024-07-16 00:28:28.763724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.368 [2024-07-16 00:28:28.763743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.368 [2024-07-16 00:28:28.763992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.368 [2024-07-16 00:28:28.764236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.368 [2024-07-16 00:28:28.764260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.368 [2024-07-16 00:28:28.764275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.368 [2024-07-16 00:28:28.767839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.368 [2024-07-16 00:28:28.777107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.368 [2024-07-16 00:28:28.777564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.368 [2024-07-16 00:28:28.777595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.368 [2024-07-16 00:28:28.777613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.368 [2024-07-16 00:28:28.777851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.368 [2024-07-16 00:28:28.778104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.368 [2024-07-16 00:28:28.778129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.368 [2024-07-16 00:28:28.778145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.368 [2024-07-16 00:28:28.781714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.368 [2024-07-16 00:28:28.791010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.368 [2024-07-16 00:28:28.791468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.368 [2024-07-16 00:28:28.791500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420
00:24:37.368 [2024-07-16 00:28:28.791518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set
00:24:37.368 [2024-07-16 00:28:28.791756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor
00:24:37.368 [2024-07-16 00:28:28.792010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.368 [2024-07-16 00:28:28.792035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.368 [2024-07-16 00:28:28.792050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.368 [2024-07-16 00:28:28.795620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.368 [2024-07-16 00:28:28.804893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.805367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.805398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.805416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.805654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.805908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.805934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.805950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.809516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.368 [2024-07-16 00:28:28.818797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.819264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.819296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.819313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.819552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.819795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.819819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.819834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.823407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.368 [2024-07-16 00:28:28.832677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.833136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.833168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.833186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.833429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.833673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.833697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.833712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.837287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.368 [2024-07-16 00:28:28.846556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.847020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.847053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.847071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.847309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.847553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.847577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.847592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.851176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.368 [2024-07-16 00:28:28.860454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.860898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.860930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.860948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.861186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.861430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.861455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.861470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.865045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.368 [2024-07-16 00:28:28.874322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.874773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.874806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.874824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.875072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.875316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.875341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.875362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.878938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.368 [2024-07-16 00:28:28.888234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.888669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.888700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.888718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.888966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.889210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.889235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.889250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.892813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.368 [2024-07-16 00:28:28.902080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.902537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.902568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.902586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.902824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.903077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.903103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.903118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.906679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.368 [2024-07-16 00:28:28.915961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.916424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.916455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.916473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.916711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.916965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.916990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.917005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.920567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.368 [2024-07-16 00:28:28.929830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.930283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.930315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.930333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.930572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.930815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.930840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.930855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.934425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.368 [2024-07-16 00:28:28.943686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.944151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.944183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.944200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.944438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.944681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.944705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-16 00:28:28.944720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-16 00:28:28.948291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.368 [2024-07-16 00:28:28.957560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-16 00:28:28.958014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-16 00:28:28.958047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-16 00:28:28.958066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.368 [2024-07-16 00:28:28.958304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.368 [2024-07-16 00:28:28.958548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-16 00:28:28.958572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.369 [2024-07-16 00:28:28.958587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.369 [2024-07-16 00:28:28.962162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.369 [2024-07-16 00:28:28.971421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.369 [2024-07-16 00:28:28.971886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.369 [2024-07-16 00:28:28.971918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.369 [2024-07-16 00:28:28.971936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.369 [2024-07-16 00:28:28.972179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.369 [2024-07-16 00:28:28.972423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.369 [2024-07-16 00:28:28.972448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.369 [2024-07-16 00:28:28.972463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.369 [2024-07-16 00:28:28.976038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.627 [2024-07-16 00:28:28.985307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.627 [2024-07-16 00:28:28.985704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.627 [2024-07-16 00:28:28.985737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.627 [2024-07-16 00:28:28.985755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.627 [2024-07-16 00:28:28.986008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.627 [2024-07-16 00:28:28.986254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.627 [2024-07-16 00:28:28.986278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.627 [2024-07-16 00:28:28.986294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.627 [2024-07-16 00:28:28.989866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.627 [2024-07-16 00:28:28.999152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.627 [2024-07-16 00:28:28.999585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.627 [2024-07-16 00:28:28.999617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.627 [2024-07-16 00:28:28.999635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.627 [2024-07-16 00:28:28.999872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.627 [2024-07-16 00:28:29.000125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.627 [2024-07-16 00:28:29.000150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.627 [2024-07-16 00:28:29.000166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.627 [2024-07-16 00:28:29.003733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.627 [2024-07-16 00:28:29.013013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.627 [2024-07-16 00:28:29.013439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.627 [2024-07-16 00:28:29.013471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.627 [2024-07-16 00:28:29.013489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.627 [2024-07-16 00:28:29.013726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.013979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.014004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.014025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.017593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.628 [2024-07-16 00:28:29.026866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.027313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.027352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.027370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.027608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.027851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.027885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.027904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.031472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.628 [2024-07-16 00:28:29.040743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.041186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.041218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.041236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.041474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.041718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.041743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.041758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.045331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.628 [2024-07-16 00:28:29.054601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.055065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.055097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.055115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.055353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.055596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.055621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.055637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.059210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.628 [2024-07-16 00:28:29.068477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.068943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.068980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.068999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.069237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.069480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.069504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.069519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.073174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.628 [2024-07-16 00:28:29.082133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.082557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.082585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.082602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.082830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.083071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.083095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.083109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.086355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.628 [2024-07-16 00:28:29.095692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.096100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.096130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.096146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.096389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.096596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.096617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.096629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.099900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.628 [2024-07-16 00:28:29.109237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.109631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.109659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.109680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.109927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.110147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.110184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.110197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.113344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.628 [2024-07-16 00:28:29.121114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.628 [2024-07-16 00:28:29.122705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.123140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.123169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.123185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.123426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.123632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.123652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.123666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.126978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.628 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.628 [2024-07-16 00:28:29.136300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.136723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.628 [2024-07-16 00:28:29.136751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.628 [2024-07-16 00:28:29.136767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.628 [2024-07-16 00:28:29.137036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.628 [2024-07-16 00:28:29.137275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.628 [2024-07-16 00:28:29.137296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.628 [2024-07-16 00:28:29.137308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.628 [2024-07-16 00:28:29.140412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.628 [2024-07-16 00:28:29.149755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.628 [2024-07-16 00:28:29.150350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.629 [2024-07-16 00:28:29.150386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.629 [2024-07-16 00:28:29.150405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.629 [2024-07-16 00:28:29.150645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.629 [2024-07-16 00:28:29.150855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.629 [2024-07-16 00:28:29.150902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.629 [2024-07-16 00:28:29.150920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.629 [2024-07-16 00:28:29.154092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.629 [2024-07-16 00:28:29.163385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.629 [2024-07-16 00:28:29.163943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.629 [2024-07-16 00:28:29.163985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.629 [2024-07-16 00:28:29.164005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.629 [2024-07-16 00:28:29.164243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.629 Malloc0 00:24:37.629 [2024-07-16 00:28:29.164483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.629 [2024-07-16 00:28:29.164506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.629 [2024-07-16 00:28:29.164524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.629 [2024-07-16 00:28:29.167812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.629 [2024-07-16 00:28:29.177070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.629 [2024-07-16 00:28:29.177588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.629 [2024-07-16 00:28:29.177617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488fe0 with addr=10.0.0.2, port=4420 00:24:37.629 [2024-07-16 00:28:29.177633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488fe0 is same with the state(5) to be set 00:24:37.629 [2024-07-16 00:28:29.177874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488fe0 (9): Bad file descriptor 00:24:37.629 [2024-07-16 00:28:29.178117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.629 [2024-07-16 00:28:29.178149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.629 [2024-07-16 00:28:29.178164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.629 [2024-07-16 00:28:29.181422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.629 [2024-07-16 00:28:29.184278] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.629 00:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2386449 00:24:37.629 [2024-07-16 00:28:29.190577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.887 [2024-07-16 00:28:29.351982] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:46.004 00:24:46.004 Latency(us) 00:24:46.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.004 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:46.004 Verification LBA range: start 0x0 length 0x4000 00:24:46.004 Nvme1n1 : 15.01 5928.48 23.16 10783.11 0.00 7633.88 1165.08 16408.27 00:24:46.004 =================================================================================================================== 00:24:46.004 Total : 5928.48 23.16 10783.11 0.00 7633.88 1165.08 16408.27 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:46.262 rmmod nvme_tcp 00:24:46.262 rmmod nvme_fabrics 00:24:46.262 rmmod nvme_keyring 00:24:46.262 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2387139 ']' 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2387139 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2387139 
']' 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2387139 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2387139 00:24:46.519 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:46.520 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:46.520 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2387139' 00:24:46.520 killing process with pid 2387139 00:24:46.520 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2387139 00:24:46.520 00:28:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2387139 00:24:46.779 00:28:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:46.779 00:28:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:46.779 00:28:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:46.779 00:28:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.779 00:28:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.779 00:28:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.779 00:28:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.779 00:28:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.682 00:28:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:48.682 00:24:48.682 real 0m23.222s 00:24:48.682 user 1m3.197s 00:24:48.682 sys 0m4.094s 00:24:48.682 00:28:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:48.683 00:28:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:48.683 ************************************ 00:24:48.683 END TEST nvmf_bdevperf 00:24:48.683 ************************************ 00:24:48.683 00:28:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:48.683 00:28:40 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:48.683 00:28:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:48.683 00:28:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:48.683 00:28:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:48.683 ************************************ 00:24:48.683 START TEST nvmf_target_disconnect 00:24:48.683 ************************************ 00:24:48.683 00:28:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:48.940 * Looking for test storage... 
00:24:48.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:48.940 00:28:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.941 00:28:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:50.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:50.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.839 00:28:42 
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:24:50.839 Found net devices under 0000:0a:00.0: cvl_0_0
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:50.839 Found net devices under 0000:0a:00.1: cvl_0_1
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
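The @382-@401 loop above resolves each matched PCI function to its kernel net device through sysfs. A hand-run sketch of the same mapping, using the two addresses found above:

  # each PCI network function lists its netdev name(s) under /sys
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"   # prints cvl_0_0 and cvl_0_1 on this host
  done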
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:50.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:50.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms
00:24:50.839 
00:24:50.839 --- 10.0.0.2 ping statistics ---
00:24:50.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:50.839 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:24:50.839 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:50.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:50.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:24:50.840 
00:24:50.840 --- 10.0.0.1 ping statistics ---
00:24:50.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:50.840 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
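nvmf_tcp_init has now split the two E810 ports across a namespace boundary: cvl_0_0 (10.0.0.2, target side) lives in cvl_0_0_ns_spdk, cvl_0_1 (10.0.0.1, initiator side) stays in the root namespace, and both directions ping. A standalone sketch of the same topology for a machine without these NICs, with a veth pair (hypothetical names veth_ini/veth_tgt) standing in for the physical ports:

  ip netns add spdk_tgt_ns                          # analogous to cvl_0_0_ns_spdk
  ip link add veth_ini type veth peer name veth_tgt
  ip link set veth_tgt netns spdk_tgt_ns            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev veth_ini              # initiator IP, as in the log
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_ini up
  ip netns exec spdk_tgt_ns ip link set veth_tgt up
  ip netns exec spdk_tgt_ns ip link set lo up
  ping -c 1 10.0.0.2                                # initiator -> target reachability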
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:24:50.840 ************************************
00:24:50.840 START TEST nvmf_target_disconnect_tc1
00:24:50.840 ************************************
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:24:50.840 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:51.097 EAL: No free 2048 kB hugepages reported on node 1
00:24:51.097 [2024-07-16 00:28:42.539466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.097 [2024-07-16 00:28:42.539532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1335340 with addr=10.0.0.2, port=4420
00:24:51.097 [2024-07-16 00:28:42.539575] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:51.097 [2024-07-16 00:28:42.539603] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:51.097 [2024-07-16 00:28:42.539620] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
00:24:51.098 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:24:51.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:24:51.098 Initializing NVMe Controllers
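This failure is the point of tc1: nothing is listening on 10.0.0.2:4420 yet (nvmf_tgt is only started in tc2), so spdk_nvme_probe() must fail and the NOT wrapper treats the non-zero exit as a pass. errno 111 is ECONNREFUSED. A sketch of the same expectation written directly, assuming the build tree used by this job:

  # expected-failure check: the probe must be refused while no target listens
  if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
      echo 'unexpected success: no listener should exist yet' >&2
      exit 1
  fi
  grep ECONNREFUSED /usr/include/asm-generic/errno.h   # -> #define ECONNREFUSED 111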
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:24:51.098 
00:24:51.098 real 0m0.103s
00:24:51.098 user 0m0.042s
00:24:51.098 sys 0m0.061s
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:24:51.098 ************************************
00:24:51.098 END TEST nvmf_target_disconnect_tc1
00:24:51.098 ************************************
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:24:51.098 ************************************
00:24:51.098 START TEST nvmf_target_disconnect_tc2
00:24:51.098 ************************************
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2390271
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2390271
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2390271 ']'
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:51.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
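disconnect_init is doing two things here: launching nvmf_tgt inside the target namespace and polling the RPC socket until it answers. A rough standalone equivalent of nvmfappstart/waitforlisten (using rpc_get_methods as the liveness probe is an assumption; the harness's loop differs in detail):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
      sleep 0.1
  done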
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:51.098 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.098 [2024-07-16 00:28:42.649962] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization...
00:24:51.098 [2024-07-16 00:28:42.650053] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:51.098 EAL: No free 2048 kB hugepages reported on node 1
00:24:51.356 [2024-07-16 00:28:42.715043] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:51.356 [2024-07-16 00:28:42.828125] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:51.356 [2024-07-16 00:28:42.828204] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:51.356 [2024-07-16 00:28:42.828218] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:51.356 [2024-07-16 00:28:42.828245] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:51.356 [2024-07-16 00:28:42.828256] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:51.356 [2024-07-16 00:28:42.828348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:24:51.356 [2024-07-16 00:28:42.828412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:24:51.356 [2024-07-16 00:28:42.828443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:24:51.356 [2024-07-16 00:28:42.828447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:24:51.356 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:51.356 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:24:51.356 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:51.356 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:51.356 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.614 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:51.614 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:51.614 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:51.614 00:28:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.615 Malloc0
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.615 [2024-07-16 00:28:43.010030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.615 [2024-07-16 00:28:43.038337] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
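The rpc_cmd calls above provision the whole target in four steps: a 64 MiB malloc bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and data plus discovery listeners on 10.0.0.2:4420. Written out against rpc.py directly (a sketch; rpc_cmd is a thin wrapper, and the UNIX socket is reachable without entering the namespace):

  RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420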
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2390416
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:24:51.615 00:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:51.616 EAL: No free 2048 kB hugepages reported on node 1
00:24:53.526 00:28:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2390271
00:24:53.526 00:28:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:24:53.526 Read completed with error (sct=0, sc=8)
00:24:53.526 starting I/O failed
00:24:53.526 Write completed with error (sct=0, sc=8)
00:24:53.526 starting I/O failed
00:24:53.526 [2024-07-16 00:28:45.065619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:53.526 Read completed with error (sct=0, sc=8)
00:24:53.526 starting I/O failed
00:24:53.526 Write completed with error (sct=0, sc=8)
00:24:53.526 starting I/O failed
00:24:53.526 [2024-07-16 00:28:45.066062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:53.526 Read completed with error (sct=0, sc=8)
00:24:53.526 starting I/O failed
00:24:53.527 Write completed with error (sct=0, sc=8)
00:24:53.527 starting I/O failed
00:24:53.527 [2024-07-16 00:28:45.066352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.527 [2024-07-16 00:28:45.066630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.527 [2024-07-16 00:28:45.066662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:53.527 qpair failed and we were unable to recover it.
00:24:53.527 [2024-07-16 00:28:45.067394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.527 [2024-07-16 00:28:45.067419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:53.527 qpair failed and we were unable to recover it.
00:24:53.529 [2024-07-16 00:28:45.085804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.529 [2024-07-16 00:28:45.085846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:53.529 qpair failed and we were unable to recover it.
00:24:53.529 [2024-07-16 00:28:45.086063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.529 [2024-07-16 00:28:45.086103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420
00:24:53.529 qpair failed and we were unable to recover it.
00:24:53.529 [2024-07-16 00:28:45.087976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.529 [2024-07-16 00:28:45.088002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420
00:24:53.529 qpair failed and we were unable to recover it.
00:24:53.529 [2024-07-16 00:28:45.088161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.088187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.088339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.088365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.088581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.088606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.088762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.088789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.088992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.089018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.089186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.089214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.089412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.089438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.089616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.089641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.089845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.089871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.090057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.090083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 
00:24:53.529 [2024-07-16 00:28:45.090234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.090259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.090448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.090474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.090627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.090654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.090855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.090887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.091038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.091063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.529 [2024-07-16 00:28:45.091209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.529 [2024-07-16 00:28:45.091235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.529 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.091405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.091431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.091636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.091661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.091807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.091833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.091997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.092023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 
00:24:53.530 [2024-07-16 00:28:45.092174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.092199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.092387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.092429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.092581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.092610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.092833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.092859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.093028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.093053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.093208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.093234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.093377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.093402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.093590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.093631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.093815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.093843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.094023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.094049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 
00:24:53.530 [2024-07-16 00:28:45.094228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.094253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.094454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.094480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.094684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.094709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.094912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.094938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.095110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.095136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.095352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.095378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.095524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.095550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.095729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.095754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.095934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.095960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.096146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.096172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 
00:24:53.530 [2024-07-16 00:28:45.096315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.096341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.096546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.096572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.096737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.096763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.096947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.096973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.097120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.097146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.097315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.097341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.097546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.097575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.097788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.530 [2024-07-16 00:28:45.097816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.530 qpair failed and we were unable to recover it. 00:24:53.530 [2024-07-16 00:28:45.098000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.098027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.098178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.098203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 
00:24:53.531 [2024-07-16 00:28:45.098345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.098371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.098607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.098633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.098811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.098837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.098999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.099025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.099173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.099199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.099367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.099409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.099578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.099606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.099825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.099850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.100006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.100031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.100178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.100220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 
00:24:53.531 [2024-07-16 00:28:45.100412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.100437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.100610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.100639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.100792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.100818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.100987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.101013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.101159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.101185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.101375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.101403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.101709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.101762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.101970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.101996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.102147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.102172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.102325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.102350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 
00:24:53.531 [2024-07-16 00:28:45.102554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.102580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.102755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.102783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.102986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.103013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.103211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.103240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.103438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.103464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.103640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.103666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.103862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.103893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.104070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.104095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.104247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.104272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.104453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.104478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 
00:24:53.531 [2024-07-16 00:28:45.104628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.104655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.104796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.104838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.105019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.105045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.105212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.105237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.105409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.105434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.105583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.105608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.105785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.105811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.105982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.106008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.106147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.531 [2024-07-16 00:28:45.106172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.531 qpair failed and we were unable to recover it. 00:24:53.531 [2024-07-16 00:28:45.106417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.106445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 
00:24:53.532 [2024-07-16 00:28:45.106832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.106901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.107073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.107098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.107250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.107276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.107441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.107468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.107646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.107690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.107890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.107933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.108112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.108137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.108346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.108371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.108518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.108562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.108783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.108811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 
00:24:53.532 [2024-07-16 00:28:45.108984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.109010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.109164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.109189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.109461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.109500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.109686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.109732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.109940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.109967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.110142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.110169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.110343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.110387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.110607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.110633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.110891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.110919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.111096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.111122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 
00:24:53.532 [2024-07-16 00:28:45.111299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.111344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.111539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.111568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.111759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.111786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.112042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.112069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.112291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.112317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.112495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.112521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.112746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.112773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.112953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.112980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.113171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.113201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.113444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.113470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 
00:24:53.532 [2024-07-16 00:28:45.113700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.113762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.113942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.113968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.114123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.114149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.114301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.114327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.114469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.114495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.114700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.114726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.114929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.114955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.115134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.115161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.115319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.115346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.532 qpair failed and we were unable to recover it. 00:24:53.532 [2024-07-16 00:28:45.115557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.532 [2024-07-16 00:28:45.115585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:53.533 qpair failed and we were unable to recover it. 
00:24:53.533 [2024-07-16 00:28:45.115761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.533 [2024-07-16 00:28:45.115788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420
00:24:53.533 qpair failed and we were unable to recover it.
00:24:53.533 [2024-07-16 00:28:45.115952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.533 [2024-07-16 00:28:45.115978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420
00:24:53.533 qpair failed and we were unable to recover it.
00:24:53.533 [2024-07-16 00:28:45.116140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.533 [2024-07-16 00:28:45.116168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:53.533 qpair failed and we were unable to recover it.
00:24:53.533 [... the same three-line error record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times with increasing timestamps, 2024-07-16 00:28:45.116375 through 00:28:45.161366 ...]
00:24:53.815 [2024-07-16 00:28:45.161562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.815 [2024-07-16 00:28:45.161588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:53.815 qpair failed and we were unable to recover it.
00:24:53.815 [2024-07-16 00:28:45.161732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.161758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.161920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.161947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.162119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.162145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.162343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.162372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.162542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.162570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.162772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.162798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.162970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.162996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.163166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.163194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.163380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.163406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.163565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.163593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 
00:24:53.815 [2024-07-16 00:28:45.163788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.163817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.164041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.164067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.164245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.164274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.164488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.164518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.164720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.164746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.164943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.164972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.165135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.165163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.165346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.165372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.165568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.165597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.165762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.165791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 
00:24:53.815 [2024-07-16 00:28:45.165987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.166013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.166205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.166233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.166429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.166457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.166627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.166653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.166848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.166883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.167081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.167106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.167259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.167285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.167421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.167464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.167660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.167688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 00:24:53.815 [2024-07-16 00:28:45.167887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.167913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.815 qpair failed and we were unable to recover it. 
00:24:53.815 [2024-07-16 00:28:45.168093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.815 [2024-07-16 00:28:45.168119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.168264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.168290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.168462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.168488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.168678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.168706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.168903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.168933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.169093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.169119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.169313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.169342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.169504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.169534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.169721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.169747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.169907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.169936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 
00:24:53.816 [2024-07-16 00:28:45.170124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.170152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.170371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.170397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.170593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.170621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.170809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.170837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.171039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.171065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.171221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.171246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.171419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.171445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.171616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.171646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.171838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.171867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.172050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.172076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 
00:24:53.816 [2024-07-16 00:28:45.172248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.172274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.172448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.172474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.172654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.172683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.172857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.172898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.173100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.173129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.173321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.173350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.173544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.173569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.816 qpair failed and we were unable to recover it. 00:24:53.816 [2024-07-16 00:28:45.173745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.816 [2024-07-16 00:28:45.173773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.173966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.173996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.174193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.174219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 
00:24:53.817 [2024-07-16 00:28:45.174395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.174420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.174576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.174602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.174750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.174777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.174994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.175023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.175222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.175250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.175444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.175469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.175732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.175781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.176004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.176030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.176207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.176232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.176410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.176436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 
00:24:53.817 [2024-07-16 00:28:45.176633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.176661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.176887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.176913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.177085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.177113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.177307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.177334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.177557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.177582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.177786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.177815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.178025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.178054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.178266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.178292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.178459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.178489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.178680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.178709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 
00:24:53.817 [2024-07-16 00:28:45.178884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.178910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.179092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.179118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.179296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.179321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.179498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.179523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.179805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.179858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.180033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.180062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.180220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.180246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.180464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.180492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.180692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.180718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.180902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.180929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 
00:24:53.817 [2024-07-16 00:28:45.181129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.181157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.181352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.181380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.181575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.181601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.181803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.181844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.182023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.182052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.182247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.182273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.182423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.182448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.182644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.817 [2024-07-16 00:28:45.182669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.817 qpair failed and we were unable to recover it. 00:24:53.817 [2024-07-16 00:28:45.182872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.182906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.183070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.183099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 
00:24:53.818 [2024-07-16 00:28:45.183263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.183292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.183486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.183512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.183664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.183689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.183868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.183902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.184139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.184165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.184316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.184342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.184517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.184545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.184734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.184759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.184986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.185015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.185211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.185240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 
00:24:53.818 [2024-07-16 00:28:45.185414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.185440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.185636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.185664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.185849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.185885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.186115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.186140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.186343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.186372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.186541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.186574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.186800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.186829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.187063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.187090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.187256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.187285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.187479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.187505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 
00:24:53.818 [2024-07-16 00:28:45.187695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.187724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.187917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.187945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.188136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.188162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.188364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.188392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.188583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.188611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.188804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.188830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.189014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.189041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.189204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.189234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.189436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.189462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 00:24:53.818 [2024-07-16 00:28:45.189621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.818 [2024-07-16 00:28:45.189646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:53.818 qpair failed and we were unable to recover it. 
00:24:53.818 [2024-07-16 00:28:45.189822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.818 [2024-07-16 00:28:45.189848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:53.818 qpair failed and we were unable to recover it.
[the connect()/qpair failure above repeated 54 more times for tqpair=0xecc3f0, timestamps 00:28:45.190031 through 00:28:45.201507, each attempt ending "qpair failed and we were unable to recover it."]
00:24:53.820 [2024-07-16 00:28:45.201659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeda3f0 is same with the state(5) to be set
00:24:53.820 Read completed with error (sct=0, sc=8)
00:24:53.820 starting I/O failed
[31 further completions with the identical status followed: 32 aborted I/Os in total, 16 reads and 16 writes, each flagged "starting I/O failed"]
00:24:53.820 [2024-07-16 00:28:45.202067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:53.820 [2024-07-16 00:28:45.202269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.820 [2024-07-16 00:28:45.202312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:53.820 qpair failed and we were unable to recover it.
[the same connect()/qpair failure repeated 145 more times for tqpair=0x7f4cec000b90, timestamps 00:28:45.202517 through 00:28:45.235849]
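
For context: errno 111 on Linux is ECONNREFUSED. Each TCP connection attempt to 10.0.0.2 port 4420 was actively refused, i.e. an RST came back because nothing was accepting on that port at the time (typically the NVMe-oF target listener being down or not yet up). Because a refusal is immediate rather than a timeout, the host can retry every few hundred microseconds, which matches the timestamp spacing above. Below is a minimal sketch of the failing call pattern, assuming a plain blocking socket rather than SPDK's non-blocking posix sock layer; it is illustrative only, not the actual posix.c code.

    /* Illustrative only: how connect() surfaces ECONNREFUSED (errno 111),
     * the condition posix_sock_create keeps reporting in the log above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static int try_connect(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -errno;
        }

        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
        if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
            close(fd);
            return -EINVAL;
        }

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            int err = errno;    /* 111 == ECONNREFUSED in this log */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
            close(fd);
            return -err;
        }
        return fd;              /* caller owns the connected socket */
    }

A quick manual check from the initiator host would be something like "nc -z 10.0.0.2 4420" (with a netcat build that supports -z): it fails instantly while the listener is absent and succeeds once the target's TCP transport is listening again.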
00:24:53.824 [2024-07-16 00:28:45.236045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.236072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.236269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.236298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.236531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.236560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.236727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.236753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.236979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.237008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.237245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.237275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.237457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.237483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.237685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.237714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.237905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.237935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.238152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.238178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 
00:24:53.824 [2024-07-16 00:28:45.238360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.238385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.238592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.238621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.238817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.238843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.239080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.239110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.239332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.824 [2024-07-16 00:28:45.239361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.824 qpair failed and we were unable to recover it. 00:24:53.824 [2024-07-16 00:28:45.239532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.239557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.239751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.239779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.239953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.239983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.240193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.240220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.240416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.240445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 
00:24:53.825 [2024-07-16 00:28:45.240613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.240643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.240870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.240903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.241101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.241130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.241322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.241351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.241553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.241579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.241773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.241802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.242030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.242059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.242289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.242315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.242551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.242580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.242741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.242772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 
00:24:53.825 [2024-07-16 00:28:45.242972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.242998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.243218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.243247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.243474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.243504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.243677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.243702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.243895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.243945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.244167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.244196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.244404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.244430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.244606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.244636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.244837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.244865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.245077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.245103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 
00:24:53.825 [2024-07-16 00:28:45.245306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.245336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.245531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.245557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.245765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.245791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.245995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.246024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.246241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.246270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.246502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.246533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.246760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.246790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.246960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.246990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.247193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.825 [2024-07-16 00:28:45.247219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.825 qpair failed and we were unable to recover it. 00:24:53.825 [2024-07-16 00:28:45.247363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.247389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 
00:24:53.826 [2024-07-16 00:28:45.247562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.247588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.247770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.247796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.247963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.247992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.248197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.248226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.248430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.248455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.248676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.248705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.248903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.248937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.249136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.249162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.249342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.249368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.249574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.249603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 
00:24:53.826 [2024-07-16 00:28:45.249795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.249821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.250058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.250087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.250259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.250290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.250493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.250519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.250722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.250751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.250948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.250978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.251185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.251212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.251384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.251412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.251571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.251601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.251794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.251821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 
00:24:53.826 [2024-07-16 00:28:45.251994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.252024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.252218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.252247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.252455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.252480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.252674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.252702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.252914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.252952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.253167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.253193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.253388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.253417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.253613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.253643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.253845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.253870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.254115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.254141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 
00:24:53.826 [2024-07-16 00:28:45.254321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.254348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.254550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.254576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.254779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.254808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.255026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.255055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.255228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.255254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.255453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.255486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.255659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.255689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.255895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.255922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.256150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-16 00:28:45.256179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.826 qpair failed and we were unable to recover it. 00:24:53.826 [2024-07-16 00:28:45.256407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.256436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 
00:24:53.827 [2024-07-16 00:28:45.256637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.256663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.256860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.256894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.257087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.257115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.257349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.257375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.257570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.257599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.257796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.257826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.258007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.258034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.258252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.258281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.258505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.258533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.258743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.258769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 
00:24:53.827 [2024-07-16 00:28:45.259006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.259036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.259208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.259236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.259457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.259483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.259652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.259682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.259882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.259912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.260118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.260145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.260376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.260404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.260642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.260672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.260860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.260894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.261104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.261130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 
00:24:53.827 [2024-07-16 00:28:45.261365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.261394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.261564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.261591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.261829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.261858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.262041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.262068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.262227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.262253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.262432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.262458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.262607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.262632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.262807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.262833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.263009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.263039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.263234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.263263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 
00:24:53.827 [2024-07-16 00:28:45.263464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.263490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.263661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.263690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.263852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.263892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.264055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.264081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.264306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.264334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.264525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.264559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.264755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.264781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.264982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.265015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.265215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.265241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 00:24:53.827 [2024-07-16 00:28:45.265390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-16 00:28:45.265417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.827 qpair failed and we were unable to recover it. 
00:24:53.827 [2024-07-16 00:28:45.265605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.265634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.265856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.265892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.266080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.266107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.266276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.266307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.266528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.266557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.266762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.266788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.267005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.267047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.267245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.267274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.267476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.267503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.267713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.267742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 
00:24:53.828 [2024-07-16 00:28:45.267940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.267969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.268168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.268194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.268424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.268453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.268646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.268675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.268849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.268884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.269115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.269142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.269343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.269369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.269599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.269626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.269834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.269863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.270081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.270110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 
00:24:53.828 [2024-07-16 00:28:45.270341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.270367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.270596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.270625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.270796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.270825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.271012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.271045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.271248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.271290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.271454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.271482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.271709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.271735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.271942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.271971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.272159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.272188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.272394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.272420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 
00:24:53.828 [2024-07-16 00:28:45.272572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.272597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.272800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.272828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.273067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.273094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.273285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.273311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.273540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.273568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.273771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.273801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.273983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.274013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.274219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.274245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.274433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.274459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.274662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.274692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 
00:24:53.828 [2024-07-16 00:28:45.274908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-16 00:28:45.274939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.828 qpair failed and we were unable to recover it. 00:24:53.828 [2024-07-16 00:28:45.275143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.275169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.275371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.275401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.275601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.275630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.275806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.275832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.275987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.276015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.276214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.276242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.276435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.276461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.276662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.276691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.276901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.276931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 
00:24:53.829 [2024-07-16 00:28:45.277155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.277181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.277417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.277446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.277667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.277696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.277921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.277948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.278148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.278178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.278349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.278377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.278600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.278626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.278828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.278857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.279069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.279098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.279321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.279347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 
00:24:53.829 [2024-07-16 00:28:45.279551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.279579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.279798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.279827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.280069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.280095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.280308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.280337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.280541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.280568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.280724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.280750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.280944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.280973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.281139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.281168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.281360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.281386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.281581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.281611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 
00:24:53.829 [2024-07-16 00:28:45.281772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.281801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.282001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.282028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.282264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.282293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.282491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.282520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.282693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.829 [2024-07-16 00:28:45.282718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.829 qpair failed and we were unable to recover it. 00:24:53.829 [2024-07-16 00:28:45.282942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.282976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.283139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.283169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.283374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.283400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.283602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.283631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.283851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.283891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 
00:24:53.830 [2024-07-16 00:28:45.284124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.284150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.284345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.284373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.284583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.284612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.284810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.284837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.285072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.285099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.285281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.285306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.285474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.285500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.285695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.285724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.285893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.285923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.286125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.286152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 
00:24:53.830 [2024-07-16 00:28:45.286356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.286385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.286552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.286581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.286781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.286807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.287013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.287042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.287251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.287277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.287453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.287480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.287707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.287736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.287954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.287983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.288220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.288246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.288473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.288502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 
00:24:53.830 [2024-07-16 00:28:45.288692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.288721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.288912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.288938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.289141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.289170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.289365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.289395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.289595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.289621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.289791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.289821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.289992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.290022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.290226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.290252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.290460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.290489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.290681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.290711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 
00:24:53.830 [2024-07-16 00:28:45.290889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.290915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.291139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.291168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.291343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.291372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.291565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.291591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.291760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.291790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.291990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.830 [2024-07-16 00:28:45.292023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.830 qpair failed and we were unable to recover it. 00:24:53.830 [2024-07-16 00:28:45.292248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.292274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.292479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.292509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.292703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.292731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.292920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.292947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 
00:24:53.831 [2024-07-16 00:28:45.293159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.293188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.293392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.293421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.293594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.293620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.293809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.293839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.294041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.294070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.294264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.294292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.294513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.294542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.294738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.294767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.294960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.294987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.295189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.295219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 
00:24:53.831 [2024-07-16 00:28:45.295413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.295443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.295635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.295661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.295900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.295944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.296096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.296122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.296358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.296384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.296595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.296624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.296791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.296820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.296993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.297020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.297214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.297242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.297465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.297494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 
00:24:53.831 [2024-07-16 00:28:45.297673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.297699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.297892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.297921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.298123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.298153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.298336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.298362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.298524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.298553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.298739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.298768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.298993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.299020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.299229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.299259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.299457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.299484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.299692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.299718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 
00:24:53.831 [2024-07-16 00:28:45.299948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.299978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.300176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.300205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.300409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.300435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.300604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.300633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.300828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.300857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.301031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.301062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.301293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.831 [2024-07-16 00:28:45.301322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.831 qpair failed and we were unable to recover it. 00:24:53.831 [2024-07-16 00:28:45.301513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.301543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.301771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.301797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.301976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.302006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 
00:24:53.832 [2024-07-16 00:28:45.302206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.302234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.302460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.302486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.302661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.302689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.302919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.302949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.303163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.303189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.303368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.303394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.303574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.303603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.303801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.303827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.304030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.304060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.304261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.304290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 
00:24:53.832 [2024-07-16 00:28:45.304486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.304513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.304712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.304741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.304969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.305016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.305194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.305220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.305400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.305426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.305631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.305661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.305859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.305891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.306062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.306091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.306265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.306294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 00:24:53.832 [2024-07-16 00:28:45.306488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.832 [2024-07-16 00:28:45.306513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.832 qpair failed and we were unable to recover it. 
00:24:53.833 [2024-07-16 00:28:45.316357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.833 [2024-07-16 00:28:45.316400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:53.833 qpair failed and we were unable to recover it.
00:24:53.837 [2024-07-16 00:28:45.348888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.348918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.349113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.349139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.349293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.349319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.349522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.349564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.349741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.349768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.349936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.349966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.350130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.350160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.350327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.350354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.350580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.350609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.350801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.350829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 
00:24:53.837 [2024-07-16 00:28:45.351036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.351063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.351262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.351291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.351483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.351517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.351682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.351709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.351937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.351967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.352133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.352162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.352338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.352364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.352536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.352565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.352734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.352763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.352956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.352982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 
00:24:53.837 [2024-07-16 00:28:45.353139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.353182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.353373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.353401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.353603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.353629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.353830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.353858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.354065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.354091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.354288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.837 [2024-07-16 00:28:45.354314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.837 qpair failed and we were unable to recover it. 00:24:53.837 [2024-07-16 00:28:45.354523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.354550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.354755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.354780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.354988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.355016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.355186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.355215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 
00:24:53.838 [2024-07-16 00:28:45.355434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.355463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.355632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.355659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.355890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.355920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.356089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.356118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.356314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.356340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.356566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.356595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.356767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.356796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.357022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.357049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.357277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.357306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.357468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.357499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 
00:24:53.838 [2024-07-16 00:28:45.357693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.357719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.357868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.357907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.358065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.358091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.358242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.358269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.358470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.358500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.358667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.358697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.358896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.358922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.359126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.359156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.359349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.359379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.359611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.359638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 
00:24:53.838 [2024-07-16 00:28:45.359834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.359864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.360217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.360247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.360484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.360515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.360740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.360769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.360932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.360962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.361182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.361208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.361358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.361384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.361597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.361638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.361830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.361856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 00:24:53.838 [2024-07-16 00:28:45.362069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.362098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.838 qpair failed and we were unable to recover it. 
00:24:53.838 [2024-07-16 00:28:45.362300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.838 [2024-07-16 00:28:45.362328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.362508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.362535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.362717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.362744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.362936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.362963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.363152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.363178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.363341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.363370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.363575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.363604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.363770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.363796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.363976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.364002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.364201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.364230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 
00:24:53.839 [2024-07-16 00:28:45.364411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.364437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.364639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.364668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.364864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.364900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.365103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.365129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.365327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.365356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.365554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.365583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.365754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.365780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.366022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.366052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.366258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.366287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.366516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.366541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 
00:24:53.839 [2024-07-16 00:28:45.366738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.366767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.366960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.366990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.367189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.367216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.367411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.367437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.367652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.367681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.367858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.367892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.368095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.368139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.368347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.368376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.368574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.368600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.368778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.368804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 
00:24:53.839 [2024-07-16 00:28:45.369022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.369048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.369259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.369285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.369461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.369492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.369632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.369657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.369816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.369842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.370019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.370048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.370267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.370296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.370485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.370511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.370714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.370744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.370950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.370977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 
00:24:53.839 [2024-07-16 00:28:45.371180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.839 [2024-07-16 00:28:45.371206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.839 qpair failed and we were unable to recover it. 00:24:53.839 [2024-07-16 00:28:45.371388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.371414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.371614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.371640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.371814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.371840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.372030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.372056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.372232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.372258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.372429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.372455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.372628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.372655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.372840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.372866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.373053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.373080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 
00:24:53.840 [2024-07-16 00:28:45.373239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.373266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.373434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.373461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.373635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.373661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.373851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.373894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.374080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.374106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.374257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.374283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.374458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.374484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.374659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.374686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.374830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.374856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.375033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.375059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 
00:24:53.840 [2024-07-16 00:28:45.375217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.375245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.375401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.375428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.375609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.375636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.375781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.375808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.376010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.376037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.376188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.376214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.376393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.376419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.376595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.376626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.376782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.376808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.376963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.376989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 
00:24:53.840 [2024-07-16 00:28:45.377145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.377171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.377336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.377363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.377519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.377549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.377700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.377726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.377935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.377961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.378165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.378191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.378405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.378431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.378631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.378660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.378838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.378865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.379059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.379085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 
00:24:53.840 [2024-07-16 00:28:45.379239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.379265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.840 qpair failed and we were unable to recover it. 00:24:53.840 [2024-07-16 00:28:45.379477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.840 [2024-07-16 00:28:45.379504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 00:24:53.841 [2024-07-16 00:28:45.379676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.841 [2024-07-16 00:28:45.379703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 00:24:53.841 [2024-07-16 00:28:45.379884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.841 [2024-07-16 00:28:45.379915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 00:24:53.841 [2024-07-16 00:28:45.380088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.841 [2024-07-16 00:28:45.380115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 00:24:53.841 [2024-07-16 00:28:45.380306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.841 [2024-07-16 00:28:45.380334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 00:24:53.841 [2024-07-16 00:28:45.380546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.841 [2024-07-16 00:28:45.380573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 00:24:53.841 [2024-07-16 00:28:45.380747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.841 [2024-07-16 00:28:45.380776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 00:24:53.841 [2024-07-16 00:28:45.380995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.841 [2024-07-16 00:28:45.381022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 00:24:53.841 [2024-07-16 00:28:45.381175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.841 [2024-07-16 00:28:45.381201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:53.841 qpair failed and we were unable to recover it. 
00:24:54.116 [2024-07-16 00:28:45.427613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.427639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.427781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.427807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.427964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.427990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.428133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.428159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.428324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.428351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.428531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.428557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.428701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.428727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.428906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.428933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.429083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.429109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.429305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.429331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 
00:24:54.116 [2024-07-16 00:28:45.429503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.429529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.429685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.429712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.429900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.429927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.430079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.430106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.430284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.430309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.430508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.430534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.430715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.430740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.430891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.430917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.431065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.431091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.431271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.431310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 
00:24:54.116 [2024-07-16 00:28:45.431465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.431492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.431643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.431669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.431868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.431902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.432060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.432087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.116 [2024-07-16 00:28:45.432239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.116 [2024-07-16 00:28:45.432265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.116 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.432412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.432437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.432593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.432618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.432791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.432816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.432993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.433020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.433189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.433214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 
00:24:54.117 [2024-07-16 00:28:45.433360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.433385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.433561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.433587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.433762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.433788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.433939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.433965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.434113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.434139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.434347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.434373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.434521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.434547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.434727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.434752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.434890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.434916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.435061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.435087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 
00:24:54.117 [2024-07-16 00:28:45.435260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.435285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.435430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.435455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.435599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.435625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.435776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.435801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.435956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.435983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.436156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.436182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.436336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.436366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.436540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.436566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.436716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.436741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.436939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.436965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 
00:24:54.117 [2024-07-16 00:28:45.437114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.437140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.437320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.437346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.437546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.437571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.437713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.437738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.437884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.437911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.438079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.438105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.438245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.438271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.438413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.438438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.438592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.438618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.438757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.438782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 
00:24:54.117 [2024-07-16 00:28:45.438940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.438966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.439119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.439144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.439287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.439313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.439462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.439487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.439662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.439687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.439829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.117 [2024-07-16 00:28:45.439855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.117 qpair failed and we were unable to recover it. 00:24:54.117 [2024-07-16 00:28:45.440027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.440066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.440252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.440280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.440481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.440507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.440657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.440685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 
00:24:54.118 [2024-07-16 00:28:45.440863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.440899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.441077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.441104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.441278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.441304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.441457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.441488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.441639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.441664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.441815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.441842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.442007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.442046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.442313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.442340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.442596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.442623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.442802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.442829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 
00:24:54.118 [2024-07-16 00:28:45.443015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.443041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.443225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.443263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.443454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.443480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.443660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.443687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.443857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.443889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.444041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.444076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.444236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.444263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.444447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.444473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.444645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.444670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.444847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.444873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 
00:24:54.118 [2024-07-16 00:28:45.445033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.445059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.445230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.445257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.445435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.445461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.445613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.445639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.445840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.445866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.446019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.446045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.446197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.118 [2024-07-16 00:28:45.446222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.118 qpair failed and we were unable to recover it. 00:24:54.118 [2024-07-16 00:28:45.446405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.446432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.446584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.446611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.446788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.446814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 
00:24:54.119 [2024-07-16 00:28:45.446968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.446995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.447174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.447199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.447455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.447482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.447685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.447712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.447897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.447924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.448075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.448101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.448288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.448319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.448502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.448529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.448682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.448707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.448896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.448923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 
00:24:54.119 [2024-07-16 00:28:45.449102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.449128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.449300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.449325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.449478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.449503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.449708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.449740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.449895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.449921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.450120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.450145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.450308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.450335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.450484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.450511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.450686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.450712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.450862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.450896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 
00:24:54.119 [2024-07-16 00:28:45.451068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.451093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.451266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.451291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.451491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.451518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.451677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.451702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.451874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.451905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.452061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.452086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.452263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.452288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.452439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.452466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.452613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.452639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.452813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.452839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 
00:24:54.119 [2024-07-16 00:28:45.453014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.453040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.453189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.453215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.453365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.453391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.453573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.453601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.453759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.453785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.453954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.453981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.454131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.454172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.454362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.119 [2024-07-16 00:28:45.454388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.119 qpair failed and we were unable to recover it. 00:24:54.119 [2024-07-16 00:28:45.454602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.120 [2024-07-16 00:28:45.454628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.120 qpair failed and we were unable to recover it. 00:24:54.120 [2024-07-16 00:28:45.454783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.120 [2024-07-16 00:28:45.454809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.120 qpair failed and we were unable to recover it. 
00:24:54.120 [2024-07-16 00:28:45.454969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.120 [2024-07-16 00:28:45.454997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.120 qpair failed and we were unable to recover it.
[... the same three-record triplet (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 2024-07-16 00:28:45.455 and 00:28:45.496; duplicate records collapsed ...]
00:24:54.125 [2024-07-16 00:28:45.496057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.125 [2024-07-16 00:28:45.496082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.125 qpair failed and we were unable to recover it.
00:24:54.125 [2024-07-16 00:28:45.496231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.496256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.496429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.496456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.496597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.496623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.496797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.496822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.497016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.497042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.497197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.497223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.497397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.497422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.497571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.497597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.497745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.497770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.497922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.497949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 
00:24:54.125 [2024-07-16 00:28:45.498093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.498118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.498295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.498321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.498495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.498521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.498686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.498711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.498864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.498896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.499042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.499067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.499212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.499238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.499445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.499471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.499649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.499674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.499823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.499848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 
00:24:54.125 [2024-07-16 00:28:45.500031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.500057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.500243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.125 [2024-07-16 00:28:45.500268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.125 qpair failed and we were unable to recover it. 00:24:54.125 [2024-07-16 00:28:45.500448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.500474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.500645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.500670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.500848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.500882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.501040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.501065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.501215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.501242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.501447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.501473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.501617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.501643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.501821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.501847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 
00:24:54.126 [2024-07-16 00:28:45.502009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.502040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.502195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.502221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.502431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.502457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.502658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.502683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.502826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.502851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.503038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.503065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.503247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.503272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.503443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.503469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.503615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.503641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.503813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.503839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 
00:24:54.126 [2024-07-16 00:28:45.503999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.504025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.504200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.504226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.504403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.504429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.504605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.504630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.504780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.504806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.504978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.505005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.505178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.505204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.505356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.505382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.505557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.505582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.505781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.505806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 
00:24:54.126 [2024-07-16 00:28:45.505976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.506002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.506178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.506203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.506400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.506425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.506598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.506624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.506797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.506823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.506993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.507019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.507168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.507194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.507368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.507394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.507545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.507571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.507722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.507748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 
00:24:54.126 [2024-07-16 00:28:45.507958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.507983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.508129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.508155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.126 [2024-07-16 00:28:45.508323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.126 [2024-07-16 00:28:45.508348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.126 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.508499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.508524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.508700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.508726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.508866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.508898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.509067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.509092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.509243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.509270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.509476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.509502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.509673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.509698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 
00:24:54.127 [2024-07-16 00:28:45.509867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.509913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.510121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.510147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.510325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.510350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.510526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.510556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.510729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.510755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.510927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.510953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.511127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.511153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.511327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.511352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.511526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.511553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.511730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.511756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 
00:24:54.127 [2024-07-16 00:28:45.511953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.511980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.512124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.512151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.512326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.512351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.512550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.512576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.512755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.512782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.512959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.512996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.513203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.513229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.513404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.513429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.513603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.513628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.513778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.513803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 
00:24:54.127 [2024-07-16 00:28:45.513978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.514004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.514179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.514205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.514380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.514405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.514583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.514608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.514756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.514781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.514931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.514957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.515111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.127 [2024-07-16 00:28:45.515138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.127 qpair failed and we were unable to recover it. 00:24:54.127 [2024-07-16 00:28:45.515291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.515317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.515514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.515540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.515723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.515749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 
00:24:54.128 [2024-07-16 00:28:45.515906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.515932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.516077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.516102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.516256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.516282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.516457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.516483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.516634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.516659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.516835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.516861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.517071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.517097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.517243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.517268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.517438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.517464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.517610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.517636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 
00:24:54.128 [2024-07-16 00:28:45.517787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.517817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.517992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.518019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.518166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.518192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.518340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.518366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.518510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.518536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.518687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.518714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.518869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.518901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.519049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.519074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.519252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.519282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.519438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.519464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 
00:24:54.128 [2024-07-16 00:28:45.519643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.519669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.519844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.519870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.520029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.520055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.520254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.520280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.520436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.520462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.520638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.520663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.520845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.520870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.521036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.521062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.521240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.521266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 00:24:54.128 [2024-07-16 00:28:45.521448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.521474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it. 
00:24:54.128 [2024-07-16 00:28:45.521655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.128 [2024-07-16 00:28:45.521680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.128 qpair failed and we were unable to recover it.
00:24:54.128 [... the same three-message sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, through 2024-07-16 00:28:45.563020 ...]
00:24:54.134 [2024-07-16 00:28:45.563163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.563188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.563361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.563387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.563568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.563594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.563769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.563795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.563946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.563972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.564120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.564146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.564295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.564322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.564524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.564550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.564700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.564726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.564883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.564909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 
00:24:54.134 [2024-07-16 00:28:45.565084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.565109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.565287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.565312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.565466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.565491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.565686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.565712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.565885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.565911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.566086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.566111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.566290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.566316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.566493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.566519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.566694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.566719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.566898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.566924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 
00:24:54.134 [2024-07-16 00:28:45.567078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.567106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.567255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.567280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.567454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.567491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.567658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.567684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.567841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.567868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.568055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.568081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.568233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.568258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.568430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.568455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.568629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.568654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.568810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.568836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 
00:24:54.134 [2024-07-16 00:28:45.568991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.569017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.569194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.569220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.569394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.569419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.569576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.569601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.569772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.569797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.134 qpair failed and we were unable to recover it. 00:24:54.134 [2024-07-16 00:28:45.569971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.134 [2024-07-16 00:28:45.569997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.570185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.570212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.570409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.570439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.570616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.570641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.570788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.570814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 
00:24:54.135 [2024-07-16 00:28:45.570968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.570994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.571168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.571194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.571372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.571397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.571554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.571580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.571750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.571775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.571943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.571968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.572107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.572133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.572308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.572334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.572532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.572557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.572710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.572737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 
00:24:54.135 [2024-07-16 00:28:45.572917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.572943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.573120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.573145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.573297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.573323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.573509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.573534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.573710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.573736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.573882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.573908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.574086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.574111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.574287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.574312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.574459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.574485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.574660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.574686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 
00:24:54.135 [2024-07-16 00:28:45.574832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.574857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.575011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.575037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.575195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.575221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.575398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.575423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.575603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.575629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.575803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.575829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.575980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.576007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.576174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.576199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.576406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.576431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 00:24:54.135 [2024-07-16 00:28:45.576588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.135 [2024-07-16 00:28:45.576613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.135 qpair failed and we were unable to recover it. 
00:24:54.136 [2024-07-16 00:28:45.576784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.576809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.576986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.577013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.577184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.577210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.577357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.577383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.577528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.577554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.577725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.577751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.577915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.577942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.578118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.578149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.578308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.578335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.578480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.578506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 
00:24:54.136 [2024-07-16 00:28:45.578654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.578681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.578821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.578847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.579000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.579026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.579199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.579224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.579397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.579422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.579567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.579593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.579742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.579767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.579945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.579971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.580141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.580167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.580341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.580367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 
00:24:54.136 [2024-07-16 00:28:45.580536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.580561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.580719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.580745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.580925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.580952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.581125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.581151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.581316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.581341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.581491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.581517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.581719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.581745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.581902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.581929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.582097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.582123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.582271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.582297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 
00:24:54.136 [2024-07-16 00:28:45.582473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.582498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.582638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.582664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.582826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.582852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.583007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.583033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.583207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.583233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.583377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.583403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.583549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.583574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.583717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.583742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.583893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.136 [2024-07-16 00:28:45.583919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.136 qpair failed and we were unable to recover it. 00:24:54.136 [2024-07-16 00:28:45.584090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.584116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 
00:24:54.137 [2024-07-16 00:28:45.584313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.584339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.584522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.584547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.584718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.584744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.584901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.584928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.585127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.585152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.585323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.585349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.585524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.585550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.585717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.585746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.585903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.585930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.586104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.586129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 
00:24:54.137 [2024-07-16 00:28:45.586281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.586307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.586524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.586549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.586697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.586722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.586870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.586905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.587075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.587100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.587242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.587267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.587468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.587493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.587670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.587695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.587864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.587894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 00:24:54.137 [2024-07-16 00:28:45.588072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.137 [2024-07-16 00:28:45.588097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.137 qpair failed and we were unable to recover it. 
00:24:54.137 [2024-07-16 00:28:45.588247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.137 [2024-07-16 00:28:45.588273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.137 qpair failed and we were unable to recover it.
[... the same three-line error (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every successive reconnect attempt, timestamps 2024-07-16 00:28:45.588431 through 00:28:45.628909, duplicates collapsed here ...]
00:24:54.142 [2024-07-16 00:28:45.629053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.142 [2024-07-16 00:28:45.629079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.142 qpair failed and we were unable to recover it. 00:24:54.142 [2024-07-16 00:28:45.629282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.142 [2024-07-16 00:28:45.629307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.142 qpair failed and we were unable to recover it. 00:24:54.142 [2024-07-16 00:28:45.629464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.142 [2024-07-16 00:28:45.629489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.142 qpair failed and we were unable to recover it. 00:24:54.142 [2024-07-16 00:28:45.629667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.142 [2024-07-16 00:28:45.629693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.142 qpair failed and we were unable to recover it. 00:24:54.142 [2024-07-16 00:28:45.629849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.142 [2024-07-16 00:28:45.629881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.142 qpair failed and we were unable to recover it. 00:24:54.142 [2024-07-16 00:28:45.630052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.142 [2024-07-16 00:28:45.630078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.142 qpair failed and we were unable to recover it. 00:24:54.142 [2024-07-16 00:28:45.630250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.142 [2024-07-16 00:28:45.630275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.142 qpair failed and we were unable to recover it. 00:24:54.142 [2024-07-16 00:28:45.630417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.630442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.630616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.630642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.630837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.630862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 
00:24:54.143 [2024-07-16 00:28:45.631027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.631053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.631228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.631254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.631421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.631447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.631600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.631626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.631766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.631792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.631992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.632018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.632170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.632195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.632400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.632430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.632611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.632636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.632834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.632859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 
00:24:54.143 [2024-07-16 00:28:45.633034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.633059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.633205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.633230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.633435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.633460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.633610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.633635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.633832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.633857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.634059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.634085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.634239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.634265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.634411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.634437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.634588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.634614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.634784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.634809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 
00:24:54.143 [2024-07-16 00:28:45.634985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.635011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.635192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.635218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.635367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.635393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.635569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.635595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.635776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.635801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.635978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.636004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.636178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.636203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.636390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.636415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.636586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.636611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.636757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.636783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 
00:24:54.143 [2024-07-16 00:28:45.636995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.637021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.637208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.637233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.637404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.637430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.637612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.637637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.637791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.637817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.637996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.638023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.638195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.638221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.638401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.143 [2024-07-16 00:28:45.638427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.143 qpair failed and we were unable to recover it. 00:24:54.143 [2024-07-16 00:28:45.638598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.638624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.638770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.638797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 
00:24:54.144 [2024-07-16 00:28:45.638948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.638974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.639154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.639179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.639329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.639355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.639525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.639550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.639729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.639754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.639933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.639959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.640115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.640140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.640322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.640352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.640540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.640566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.640735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.640761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 
00:24:54.144 [2024-07-16 00:28:45.640915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.640941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.641122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.641147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.641328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.641354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.641525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.641551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.641747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.641772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.641972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.641998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.642168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.642193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.642377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.642402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.642578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.642603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.642777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.642803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 
00:24:54.144 [2024-07-16 00:28:45.642962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.642988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.643195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.643221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.643392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.643417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.643563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.643588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.643761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.643786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.643990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.644016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.644189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.644215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.644359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.644386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.644563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.644589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.644789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.644815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 
00:24:54.144 [2024-07-16 00:28:45.644974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.645001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.645180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.645205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.645381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.645407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.144 qpair failed and we were unable to recover it. 00:24:54.144 [2024-07-16 00:28:45.645577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.144 [2024-07-16 00:28:45.645602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.645804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.645830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.645987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.646013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.646186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.646211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.646414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.646439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.646613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.646640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.646815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.646840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 
00:24:54.145 [2024-07-16 00:28:45.647047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.647073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.647226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.647251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.647437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.647463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.647665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.647690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.647833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.647859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.648041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.648066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.648235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.648261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.648446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.648476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.648652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.648677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.648852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.648882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 
00:24:54.145 [2024-07-16 00:28:45.649063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.649088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.649238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.649264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.649451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.649476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.649677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.649702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.649844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.649870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.650059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.650085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.650237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.650263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.650418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.650444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.650587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.650612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.650782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.650807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 
00:24:54.145 [2024-07-16 00:28:45.650982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.651008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.651190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.651216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.651393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.651419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.651567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.651592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.651765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.651791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.651946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.651973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.652126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.652152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.652324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.652350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.652525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.652550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.652698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.652725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 
00:24:54.145 [2024-07-16 00:28:45.652880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.652907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.653087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.653113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.653313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.653338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.653544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.145 [2024-07-16 00:28:45.653570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.145 qpair failed and we were unable to recover it. 00:24:54.145 [2024-07-16 00:28:45.653721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.146 [2024-07-16 00:28:45.653747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.146 qpair failed and we were unable to recover it. 00:24:54.146 [2024-07-16 00:28:45.653928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.146 [2024-07-16 00:28:45.653954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.146 qpair failed and we were unable to recover it. 00:24:54.146 [2024-07-16 00:28:45.654122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.146 [2024-07-16 00:28:45.654148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.146 qpair failed and we were unable to recover it. 00:24:54.146 [2024-07-16 00:28:45.654324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.146 [2024-07-16 00:28:45.654350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.146 qpair failed and we were unable to recover it. 00:24:54.146 [2024-07-16 00:28:45.654505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.146 [2024-07-16 00:28:45.654530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.146 qpair failed and we were unable to recover it. 00:24:54.146 [2024-07-16 00:28:45.654705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.146 [2024-07-16 00:28:45.654731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.146 qpair failed and we were unable to recover it. 
00:24:54.146 [2024-07-16 00:28:45.654906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.146 [2024-07-16 00:28:45.654932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.146 qpair failed and we were unable to recover it.
[... the three-line error above repeats verbatim, only the timestamps advancing from 2024-07-16 00:28:45.654906 through 00:28:45.696057; every retry targets the same tqpair=0x7f4cfc000b90, addr=10.0.0.2, port=4420 ...]
00:24:54.151 [2024-07-16 00:28:45.696032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.151 [2024-07-16 00:28:45.696057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.151 qpair failed and we were unable to recover it.
00:24:54.151 [2024-07-16 00:28:45.696255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.696280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.696437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.696463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.696643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.696669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.696865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.696896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.697046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.697071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.697247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.697273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.697448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.697474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.697646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.697672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.697867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.697898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.698043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.698069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 
00:24:54.151 [2024-07-16 00:28:45.698269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.698295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.698467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.698493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.698648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.698673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.698850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.698880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.699024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.699049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.699200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.699225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.699404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.699430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.699605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.699630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.699809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.699834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.700010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.700037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 
00:24:54.151 [2024-07-16 00:28:45.700186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.700211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.700388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.700414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.700590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.700616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.700788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.700813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.700980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.701010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.701167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.701193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.701364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.701390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.701532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.701557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.701732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.701757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.701957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.701983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 
00:24:54.151 [2024-07-16 00:28:45.702162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.702188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.702355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.702380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.702558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.702584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.702736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.702761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.702949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.702975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.703117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.703144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.703326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.703351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.703521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.703546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.151 qpair failed and we were unable to recover it. 00:24:54.151 [2024-07-16 00:28:45.703728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.151 [2024-07-16 00:28:45.703754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.703934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.703960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 
00:24:54.152 [2024-07-16 00:28:45.704148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.704174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.704327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.704353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.704507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.704534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.704712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.704739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.704908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.704934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.705133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.705159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.705363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.705389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.705538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.705564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.705736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.705762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.705931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.705957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 
00:24:54.152 [2024-07-16 00:28:45.706143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.706169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.706369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.706395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.706546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.706573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.706720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.706746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.706898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.706925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.707097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.707123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.707280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.707306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.707455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.707480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.707643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.707668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.707840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.707866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 
00:24:54.152 [2024-07-16 00:28:45.708061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.708087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.708239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.708265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.708443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.708469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.708646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.708671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.708850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.708885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.709071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.709097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.709307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.709333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.709508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.709534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.709707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.709732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.709907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.709933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 
00:24:54.152 [2024-07-16 00:28:45.710074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.710100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.710254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.710280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.710421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.710446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.710599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.710625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.710827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.710853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.710994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.711020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.152 qpair failed and we were unable to recover it. 00:24:54.152 [2024-07-16 00:28:45.711191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.152 [2024-07-16 00:28:45.711217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.711415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.711441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.711622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.711648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.711833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.711858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 
00:24:54.153 [2024-07-16 00:28:45.712034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.712060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.712245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.712270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.712422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.712447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.712649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.712674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.712891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.712923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.713140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.713167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.713347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.713372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.713550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.713575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.713718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.713743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.713920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.713946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 
00:24:54.153 [2024-07-16 00:28:45.714121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.714146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.714343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.714368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.714520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.714547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.153 qpair failed and we were unable to recover it. 00:24:54.153 [2024-07-16 00:28:45.714725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.153 [2024-07-16 00:28:45.714751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.434 qpair failed and we were unable to recover it. 00:24:54.434 [2024-07-16 00:28:45.714891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.714917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.715089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.715115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.715260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.715285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.715434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.715460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.715602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.715628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.715845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.715871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 
00:24:54.435 [2024-07-16 00:28:45.716075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.716101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.716276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.716302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.716456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.716482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.716661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.716687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.716862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.716898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.717106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.717132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.717289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.717314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.717486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.717511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.717711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.717737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.717890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.717916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 
00:24:54.435 [2024-07-16 00:28:45.718068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.718093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.718239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.718266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.718417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.718443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.718617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.718642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.718820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.718846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.719004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.719030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.719177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.719203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.719381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.719407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.719566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.719592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.719766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.719791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 
00:24:54.435 [2024-07-16 00:28:45.719935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.719961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.720151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.720177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.720326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.720352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.720554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.720579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.720738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.720764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.720937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.720974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.721163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.721189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.721371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.721396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.721548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.721574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.721753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.721778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 
00:24:54.435 [2024-07-16 00:28:45.721949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.721975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.722124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.722150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.722328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.722355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.722535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.722561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.435 [2024-07-16 00:28:45.722749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.435 [2024-07-16 00:28:45.722774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.435 qpair failed and we were unable to recover it. 00:24:54.436 [2024-07-16 00:28:45.722946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.436 [2024-07-16 00:28:45.722973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.436 qpair failed and we were unable to recover it. 00:24:54.436 [2024-07-16 00:28:45.723119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.436 [2024-07-16 00:28:45.723145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.436 qpair failed and we were unable to recover it. 00:24:54.436 [2024-07-16 00:28:45.723309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.436 [2024-07-16 00:28:45.723334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.436 qpair failed and we were unable to recover it. 00:24:54.436 [2024-07-16 00:28:45.723503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.436 [2024-07-16 00:28:45.723529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.436 qpair failed and we were unable to recover it. 00:24:54.436 [2024-07-16 00:28:45.723676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.436 [2024-07-16 00:28:45.723702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.436 qpair failed and we were unable to recover it. 
00:24:54.441 [2024-07-16 00:28:45.766899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.766925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.767105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.767130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.767307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.767333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.767501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.767527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.767700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.767725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.767892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.767918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.768086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.768112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.768303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.768329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.768495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.768520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.768672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.768697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 
00:24:54.441 [2024-07-16 00:28:45.768870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.768902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.769074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.769100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.769259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.769284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.769460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.769486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.769688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.769714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.769888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.769914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.770066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.770091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.770244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.770270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.770476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.770502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.770705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.770731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 
00:24:54.441 [2024-07-16 00:28:45.770881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.770908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.771111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.771136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.771282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.771309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.771465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.771492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.771674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.771699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.771849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.771875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.772062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.772088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.772267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.772293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.772467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.772493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.772659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.772684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 
00:24:54.441 [2024-07-16 00:28:45.772858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.772888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.773031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.441 [2024-07-16 00:28:45.773061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.441 qpair failed and we were unable to recover it. 00:24:54.441 [2024-07-16 00:28:45.773263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.773288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.773443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.773468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.773614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.773640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.773844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.773869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.774049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.774075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.774255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.774280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.774462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.774487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.774633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.774659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 
00:24:54.442 [2024-07-16 00:28:45.774812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.774838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.775026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.775052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.775248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.775274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.775422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.775448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.775604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.775630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.775793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.775819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.775998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.776024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.776173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.776199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.776412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.776437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.776607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.776633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 
00:24:54.442 [2024-07-16 00:28:45.776833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.776859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.777027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.777053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.777222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.777248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.777452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.777478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.777629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.777654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.777825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.777852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.778012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.778038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.778191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.778216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.778399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.778425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.778601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.778626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 
00:24:54.442 [2024-07-16 00:28:45.778805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.778830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.779008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.779034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.779183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.779209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.779381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.779406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.779613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.779639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.779816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.779841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.779997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.780024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.780166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.780192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.780360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.780386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.780582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.780607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 
00:24:54.442 [2024-07-16 00:28:45.780759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.780784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.442 qpair failed and we were unable to recover it. 00:24:54.442 [2024-07-16 00:28:45.780963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.442 [2024-07-16 00:28:45.780993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.781143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.781169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.781347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.781372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.781535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.781560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.781717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.781744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.781911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.781938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.782090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.782116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.782290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.782315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.782492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.782518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 
00:24:54.443 [2024-07-16 00:28:45.782671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.782697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.782847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.782873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.783058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.783083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.783234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.783260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.783440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.783465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.783649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.783675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.783829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.783855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.784009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.784036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.784212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.784237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.784407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.784432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 
00:24:54.443 [2024-07-16 00:28:45.784605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.784631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.784819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.784845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.785040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.785067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.785218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.785244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.785396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.785421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.785596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.785621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.785797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.785822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.785986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.786013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.786192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.786218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.786394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.786419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 
00:24:54.443 [2024-07-16 00:28:45.786563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.786588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.786744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.786769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.786923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.786949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.787157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.787182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.787360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.787386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.443 [2024-07-16 00:28:45.787585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.443 [2024-07-16 00:28:45.787610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.443 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.787796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.787821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.787968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.787994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.788198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.788224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.788397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.788423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 
00:24:54.444 [2024-07-16 00:28:45.788576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.788601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.788781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.788811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.789000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.789026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.789226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.789252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.789422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.789447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.789624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.789651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.789831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.789857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.790050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.790076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.790257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.790283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.790432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.790458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 
00:24:54.444 [2024-07-16 00:28:45.790645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.790671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.790858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.790888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.791045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.791071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.791217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.791244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.791421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.791446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.791607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.791632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.791813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.791839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.792043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.792069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.792267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.792293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 00:24:54.444 [2024-07-16 00:28:45.792466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.444 [2024-07-16 00:28:45.792492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.444 qpair failed and we were unable to recover it. 
00:24:54.444 [2024-07-16 00:28:45.792647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.444 [2024-07-16 00:28:45.792673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.444 qpair failed and we were unable to recover it.
00:24:54.450 [last 3 messages repeated 209 more times for tqpair=0x7f4cfc000b90, from 2024-07-16 00:28:45.792823 through 00:28:45.834338]
00:24:54.450 [2024-07-16 00:28:45.834517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.450 [2024-07-16 00:28:45.834543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.450 qpair failed and we were unable to recover it. 00:24:54.450 [2024-07-16 00:28:45.834693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.450 [2024-07-16 00:28:45.834719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.450 qpair failed and we were unable to recover it. 00:24:54.450 [2024-07-16 00:28:45.834867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.450 [2024-07-16 00:28:45.834902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.450 qpair failed and we were unable to recover it. 00:24:54.450 [2024-07-16 00:28:45.835055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.450 [2024-07-16 00:28:45.835081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.450 qpair failed and we were unable to recover it. 00:24:54.450 [2024-07-16 00:28:45.835280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.450 [2024-07-16 00:28:45.835305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.450 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.835476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.835501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.835678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.835705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.835869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.835912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.836068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.836095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.836264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.836290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 
00:24:54.451 [2024-07-16 00:28:45.836469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.836495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.836643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.836689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.836893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.836920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.837128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.837153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.837349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.837375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.837552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.837589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.837785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.837812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.837970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.837997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.838173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.838200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.838409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.838435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 
00:24:54.451 [2024-07-16 00:28:45.838583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.838609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.838782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.838807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.838968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.838995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.839175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.839201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.839356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.839391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.839544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.839576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.839756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.839781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.839932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.839958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.840114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.840147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.840302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.840328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 
00:24:54.451 [2024-07-16 00:28:45.840480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.840507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.840684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.840711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.840887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.840915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.841082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.841109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.841290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.841316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.841460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.841487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.841629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.841655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.841830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.841855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.842032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.842073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.842240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.842268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 
00:24:54.451 [2024-07-16 00:28:45.842449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.842475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.842628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.842654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.842859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.842894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.843051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.843078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.451 qpair failed and we were unable to recover it. 00:24:54.451 [2024-07-16 00:28:45.843232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.451 [2024-07-16 00:28:45.843258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.843402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.843428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.843577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.843604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.843776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.843802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.843979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.844006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.844155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.844181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 
00:24:54.452 [2024-07-16 00:28:45.844357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.844384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.844562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.844599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.844757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.844783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.844989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.845015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.845212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.845238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.845413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.845438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.845581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.845607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.845760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.845786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.845967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.845993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.846166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.846192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 
00:24:54.452 [2024-07-16 00:28:45.846369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.846395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.846545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.846571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.846742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.846768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.846945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.846971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.847118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.847144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.847328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.847355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.847497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.847523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.847662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.847688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.847841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.847866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.848056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.848082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 
00:24:54.452 [2024-07-16 00:28:45.848241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.848267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.848442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.848468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.848670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.848696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.848885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.848912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.849070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.849096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.849242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.849269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.849444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.849470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.849654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.849683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.849868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.849901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.850053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.850079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 
00:24:54.452 [2024-07-16 00:28:45.850256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.850281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.850428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.850465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.850627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.850653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.850799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.850825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.850984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.452 [2024-07-16 00:28:45.851011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.452 qpair failed and we were unable to recover it. 00:24:54.452 [2024-07-16 00:28:45.851158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.851185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.851385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.851411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.851589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.851615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.851787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.851813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.851959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.851985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 
00:24:54.453 [2024-07-16 00:28:45.852136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.852163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.852317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.852348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.852523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.852549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.852725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.852751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.852927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.852954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.853099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.853125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.853300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.853325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.853500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.853527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.853678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.853704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.853883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.853910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 
00:24:54.453 [2024-07-16 00:28:45.854086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.854112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.854262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.854288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.854441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.854466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.854606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.854632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.854837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.854863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.855030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.855057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.855232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.855258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.855412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.855438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.855586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.855611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.855788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.855815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 
00:24:54.453 [2024-07-16 00:28:45.855998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.856025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.856178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.856204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.856408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.856434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.856610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.856646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.856809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.856834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.857018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.857044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.857193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.857220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.857385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.857411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.857562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.857587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 00:24:54.453 [2024-07-16 00:28:45.857775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.453 [2024-07-16 00:28:45.857801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.453 qpair failed and we were unable to recover it. 
00:24:54.454 [2024-07-16 00:28:45.857973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.858000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.858173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.858200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.858375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.858401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.858573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.858598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.858750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.858776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.858953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.858979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.859125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.859151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.859318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.859344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.859527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.859554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 00:24:54.454 [2024-07-16 00:28:45.859734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.454 [2024-07-16 00:28:45.859760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.454 qpair failed and we were unable to recover it. 
00:24:54.454 [2024-07-16 00:28:45.859935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.454 [2024-07-16 00:28:45.859961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.454 qpair failed and we were unable to recover it.
00:24:54.454 [2024-07-16 00:28:45.865264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.454 [2024-07-16 00:28:45.865303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:54.454 qpair failed and we were unable to recover it.
00:24:54.461 [... duplicate records elided: the same three-line sequence (posix.c:1023 connect() failed with errno = 111, nvme_tcp.c:2383 sock connection error, "qpair failed and we were unable to recover it") repeats for every reconnect attempt from 00:28:45.860 through 00:28:45.899, alternating between tqpair=0x7f4cfc000b90 and tqpair=0x7f4cec000b90, always against addr=10.0.0.2, port=4420 ...]
00:24:54.461 [2024-07-16 00:28:45.899792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.461 [2024-07-16 00:28:45.899817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.461 qpair failed and we were unable to recover it.
00:24:54.461 [2024-07-16 00:28:45.899991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.900016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.900195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.900221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.900393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.900419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.900609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.900638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.900796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.900822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.901003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.901030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.901211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.901238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.901405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.901431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.901610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.461 [2024-07-16 00:28:45.901636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.461 qpair failed and we were unable to recover it. 00:24:54.461 [2024-07-16 00:28:45.901805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.901831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 
00:24:54.462 [2024-07-16 00:28:45.901972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.901999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.902174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.902200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.902369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.902395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.902574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.902600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.902743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.902769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.902949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.902975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.903142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.903168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.903349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.903376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.903557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.903587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.903745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.903770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 
00:24:54.462 [2024-07-16 00:28:45.903964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.903990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.904163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.904188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.904359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.904384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.904528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.904553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.904695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.904722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.904897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.904924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.905096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.905123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.905331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.905357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.905536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.905562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.905740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.905766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 
00:24:54.462 [2024-07-16 00:28:45.905947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.905973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.906126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.906151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.906312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.906337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.906478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.906503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.906684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.906710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.906882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.906908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.462 [2024-07-16 00:28:45.907067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.462 [2024-07-16 00:28:45.907096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.462 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.907269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.907295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.907440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.907465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.907635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.907660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 
00:24:54.463 [2024-07-16 00:28:45.907835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.907860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.908038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.908064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.908262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.908288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.908461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.908488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.908645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.908671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.908819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.908845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.909020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.909046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.909192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.909218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.909391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.909417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.909564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.909598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 
00:24:54.463 [2024-07-16 00:28:45.909788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.909814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.909963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.909990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.910165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.910191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.910345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.910371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.910510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.910536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.910712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.910739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.910922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.910949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.911126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.911152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.911300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.911333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.911513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.911539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 
00:24:54.463 [2024-07-16 00:28:45.911684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.911711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.911896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.911922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.912100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.912125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.463 qpair failed and we were unable to recover it. 00:24:54.463 [2024-07-16 00:28:45.912283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.463 [2024-07-16 00:28:45.912319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.912501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.912527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.912677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.912702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.912857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.912890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.913071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.913099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.913273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.913299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.913444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.913469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 
00:24:54.464 [2024-07-16 00:28:45.913656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.913682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.913857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.913901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.914111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.914137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.914291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.914317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.914467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.914504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.914686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.914713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.914889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.914916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.915084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.915110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.915259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.915294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.915460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.915487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 
00:24:54.464 [2024-07-16 00:28:45.915662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.915689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.915837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.915863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.916038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.916065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.916242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.916267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.916445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.916473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.464 [2024-07-16 00:28:45.916658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.464 [2024-07-16 00:28:45.916685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.464 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.916831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.916857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.917044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.917078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.917230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.917259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.917448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.917474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 
00:24:54.465 [2024-07-16 00:28:45.917625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.917652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.917832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.917859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.918037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.918064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.918223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.918250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.918396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.918422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.918576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.918603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.918782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.918808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.918961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.918988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.919131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.919162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.919310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.919336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 
00:24:54.465 [2024-07-16 00:28:45.919476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.919502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.919653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.919678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.919847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.919872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.920055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.920081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.920230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.920256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.920409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.920434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.920637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.920663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.920811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.920837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.920988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.921014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.921195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.921221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 
00:24:54.465 [2024-07-16 00:28:45.921421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.921446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.921592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.465 [2024-07-16 00:28:45.921617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.465 qpair failed and we were unable to recover it. 00:24:54.465 [2024-07-16 00:28:45.921806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.921831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.921979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.922005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.922145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.922171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.922347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.922373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.922541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.922567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.922765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.922790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.922938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.922964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.923115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.923140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 
00:24:54.466 [2024-07-16 00:28:45.923311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.923337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.923481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.923507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.923685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.923710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.923886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.923912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.924080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.924105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.924283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.924309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.924457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.924483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.924681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.924707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.924888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.924917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.925092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.925118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 
00:24:54.466 [2024-07-16 00:28:45.925265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.925291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.925459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.925484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.925657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.925683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.925862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.925893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.926072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.926097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.926268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.926293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.926468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.926493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.926663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.926689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.926867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.926904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.466 qpair failed and we were unable to recover it. 00:24:54.466 [2024-07-16 00:28:45.927086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.466 [2024-07-16 00:28:45.927112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.467 qpair failed and we were unable to recover it. 
00:24:54.474 [2024-07-16 00:28:45.966826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.966852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.967043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.967070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.967268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.967294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.967441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.967466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.967647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.967672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.967869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.967901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.968048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.968074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.968251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.968276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.968476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.968502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.968703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.968729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 
00:24:54.474 [2024-07-16 00:28:45.968882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.968908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.474 [2024-07-16 00:28:45.969056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.474 [2024-07-16 00:28:45.969082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.474 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.969260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.969286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.969453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.969479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.969652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.969678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.969827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.969853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.970031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.970058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.970208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.970234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.970406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.970433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.970606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.970631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 
00:24:54.475 [2024-07-16 00:28:45.970789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.970815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.970987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.971014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.971166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.971192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.971373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.971399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.971580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.971606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.971788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.971818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.972021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.972048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.972198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.972225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.972376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.972403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.972583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.972609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 
00:24:54.475 [2024-07-16 00:28:45.972809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.972836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.973014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.973040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.973189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.973215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.973355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.973381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.973543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.973569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.973742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.475 [2024-07-16 00:28:45.973768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.475 qpair failed and we were unable to recover it. 00:24:54.475 [2024-07-16 00:28:45.973918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.973944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.974122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.974148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.974321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.974347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.974509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.974535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 
00:24:54.476 [2024-07-16 00:28:45.974737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.974762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.974909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.974935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.975133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.975159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.975333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.975359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.975531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.975557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.975716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.975742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.975916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.975943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.976102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.976129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.976309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.976335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.976511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.976537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 
00:24:54.476 [2024-07-16 00:28:45.976684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.976709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.976861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.976892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.977076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.977103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.977300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.977326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.977470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.977495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.977666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.977691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.977867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.977898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.978074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.978100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.978274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.978301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.978488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.978515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 
00:24:54.476 [2024-07-16 00:28:45.978691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.978717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.978866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.978897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.979052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.476 [2024-07-16 00:28:45.979078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.476 qpair failed and we were unable to recover it. 00:24:54.476 [2024-07-16 00:28:45.979250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.979276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.979421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.979447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.979627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.979657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.979809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.979835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.980019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.980046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.980217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.980243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.980411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.980436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 
00:24:54.477 [2024-07-16 00:28:45.980589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.980616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.980797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.980823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.980998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.981024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.981202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.981228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.981368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.981394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.981594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.981620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.981766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.981792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.981949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.981976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.982133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.982158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.982338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.982363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 
00:24:54.477 [2024-07-16 00:28:45.982562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.982588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.982762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.982787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.982953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.982979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.983127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.983152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.983296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.983321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.983521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.983547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.983719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.983744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.983920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.983946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.984089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.984115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 00:24:54.477 [2024-07-16 00:28:45.984289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.477 [2024-07-16 00:28:45.984314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.477 qpair failed and we were unable to recover it. 
00:24:54.477 [2024-07-16 00:28:45.984469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.984494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.984696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.984722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.984886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.984912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.985084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.985110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.985318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.985343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.985524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.985550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.985700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.985725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.985882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.985908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.986059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.986085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.986244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.986270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 
00:24:54.478 [2024-07-16 00:28:45.986439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.986464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.986639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.986664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.986866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.986898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.987104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.987136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.987284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.987310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.987455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.987485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.987632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.987658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.987830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.987856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.988055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.988081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.988265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.988290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 
00:24:54.478 [2024-07-16 00:28:45.988448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.988473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.988647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.988673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.988865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.988896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.989048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.989074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.989274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.989300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.989451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.989476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.989654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.989680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.989821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.989848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.990069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.478 [2024-07-16 00:28:45.990095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.478 qpair failed and we were unable to recover it. 00:24:54.478 [2024-07-16 00:28:45.990279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.990305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 
00:24:54.479 [2024-07-16 00:28:45.990494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.990520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.990697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.990722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.990899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.990932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.991072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.991098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.991276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.991302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.991445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.991471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.991647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.991673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.991823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.991848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.992062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.992088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 00:24:54.479 [2024-07-16 00:28:45.992243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.992269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it. 
00:24:54.479 [2024-07-16 00:28:45.992417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.479 [2024-07-16 00:28:45.992443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.479 qpair failed and we were unable to recover it.
00:24:54.768 [the preceding posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats verbatim roughly 200 more times between 00:28:45.992 and 00:28:46.033, always for tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420; every attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it."]
00:24:54.768 [2024-07-16 00:28:46.033375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.768 [2024-07-16 00:28:46.033401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.768 qpair failed and we were unable to recover it. 00:24:54.768 [2024-07-16 00:28:46.033599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.768 [2024-07-16 00:28:46.033626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.768 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.033823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.033859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.034008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.034034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.034223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.034249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.034395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.034426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.034627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.034653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.034832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.034858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.035078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.035104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.035279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.035305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 
00:24:54.769 [2024-07-16 00:28:46.035481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.035507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.035709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.035735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.035909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.035936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.036092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.036117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.036291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.036317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.036491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.036517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.036724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.036751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.036897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.036924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.037101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.037128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.037287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.037313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 
00:24:54.769 [2024-07-16 00:28:46.037459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.037484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.037689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.037715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.037892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.037918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.038100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.038126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.038293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.038318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.038472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.038498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.038648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.038674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.038845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.038871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.039026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.039052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.039224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.039249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 
00:24:54.769 [2024-07-16 00:28:46.039400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.039426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.039624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.039649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.039824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.039851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.040050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.040076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.040254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.040281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.040456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.040482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.040653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.040679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.040855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.040898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.041043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.041069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 00:24:54.769 [2024-07-16 00:28:46.041223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.769 [2024-07-16 00:28:46.041249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.769 qpair failed and we were unable to recover it. 
00:24:54.770 [2024-07-16 00:28:46.041418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.041443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.041643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.041669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.041817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.041844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.042049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.042075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.042274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.042300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.042476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.042506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.042677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.042703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.042846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.042872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.043066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.043092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.043264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.043289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 
00:24:54.770 [2024-07-16 00:28:46.043465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.043491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.043657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.043683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.043853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.043886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.044037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.044063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.044206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.044232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.044411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.044437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.044588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.044615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.044773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.044799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.044978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.045005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.045182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.045208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 
00:24:54.770 [2024-07-16 00:28:46.045380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.045406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.045614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.045640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.045816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.045843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.046002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.046028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.046204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.046231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.046408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.046434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.046584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.046609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.046783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.046811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.046963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.046990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.047132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.047158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 
00:24:54.770 [2024-07-16 00:28:46.047339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.047365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.047510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.047535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.047736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.047762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.047902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.047928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.048086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.048112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.048289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.048315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.048511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.048536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.048686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.048713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.048894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.048920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.770 [2024-07-16 00:28:46.049101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.049127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 
00:24:54.770 [2024-07-16 00:28:46.049278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.770 [2024-07-16 00:28:46.049304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.770 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.049507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.049533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.049710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.049737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.049915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.049941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.050084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.050109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.050284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.050314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.050474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.050500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.050677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.050703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.050886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.050912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.051119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.051145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 
00:24:54.771 [2024-07-16 00:28:46.051302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.051327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.051474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.051500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.051671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.051698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.051874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.051906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.052052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.052078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.052231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.052256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.052456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.052482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.052637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.052663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.052835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.052861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.053085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.053111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 
00:24:54.771 [2024-07-16 00:28:46.053290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.053316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.053489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.053515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.053685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.053711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.053890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.053916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.054069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.054096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.054249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.054274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.054430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.054456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.054655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.054682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.054850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.054880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.055026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.055051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 
00:24:54.771 [2024-07-16 00:28:46.055218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.055244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.055424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.055450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.055601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.055628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.055829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.055855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.056034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.056060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.056217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.056242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.056419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.056445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.056595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.056620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.056790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.056815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 00:24:54.771 [2024-07-16 00:28:46.057000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.057027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.771 qpair failed and we were unable to recover it. 
00:24:54.771 [2024-07-16 00:28:46.057178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.771 [2024-07-16 00:28:46.057204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.057412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.057438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.057591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.057617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.057818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.057844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.058020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.058046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.058224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.058254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.058406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.058432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.058606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.058632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.058779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.058805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 00:24:54.772 [2024-07-16 00:28:46.058951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.772 [2024-07-16 00:28:46.058979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.772 qpair failed and we were unable to recover it. 
00:24:54.772 [2024-07-16 00:28:46.059154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.772 [2024-07-16 00:28:46.059179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.772 qpair failed and we were unable to recover it.
00:24:54.778 [... the same three-line sequence — posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 00:28:46.059 through 00:28:46.099 ...]
00:24:54.778 [2024-07-16 00:28:46.100078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.778 [2024-07-16 00:28:46.100115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.778 qpair failed and we were unable to recover it. 00:24:54.778 [2024-07-16 00:28:46.100302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.778 [2024-07-16 00:28:46.100328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.778 qpair failed and we were unable to recover it. 00:24:54.778 [2024-07-16 00:28:46.100501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.778 [2024-07-16 00:28:46.100527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.778 qpair failed and we were unable to recover it. 00:24:54.778 [2024-07-16 00:28:46.100693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.778 [2024-07-16 00:28:46.100717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.778 qpair failed and we were unable to recover it. 00:24:54.778 [2024-07-16 00:28:46.100868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.778 [2024-07-16 00:28:46.100899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.778 qpair failed and we were unable to recover it. 00:24:54.778 [2024-07-16 00:28:46.101078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.778 [2024-07-16 00:28:46.101104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.101256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.101281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.101444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.101469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.101636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.101661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.101802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.101828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 
00:24:54.779 [2024-07-16 00:28:46.101989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.102014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.102166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.102193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.102381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.102411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.102584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.102610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.102755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.102781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.102955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.102981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.103125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.103151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.103305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.103331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.103509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.103535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.103708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.103733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 
00:24:54.779 [2024-07-16 00:28:46.103935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.103962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.104133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.104165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.104340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.104365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.104545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.104571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.104747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.104772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.104943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.104970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.105146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.105173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.105315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.105340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.105524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.105550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.105701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.105727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 
00:24:54.779 [2024-07-16 00:28:46.105908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.105935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.106102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.106149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.106365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.779 [2024-07-16 00:28:46.106394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.779 qpair failed and we were unable to recover it. 00:24:54.779 [2024-07-16 00:28:46.106551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.106578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.106741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.106769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.106950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.106980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.107145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.107172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.107320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.107348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.107499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.107532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.107721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.107748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 
00:24:54.780 [2024-07-16 00:28:46.107920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.107948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.108144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.108173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.108335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.108362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.108517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.108544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.108732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.108759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.108914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.108945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.109122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.109153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.109333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.109360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.109510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.109537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.109715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.109742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 
00:24:54.780 [2024-07-16 00:28:46.109909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.109937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.110094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.110127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.110279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.110310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.110516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.110543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.110929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.110959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.111112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.111138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.111296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.111324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.111509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.111536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.111685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.111712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.111894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.111923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 
00:24:54.780 [2024-07-16 00:28:46.112079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.112107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.780 [2024-07-16 00:28:46.112301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.780 [2024-07-16 00:28:46.112327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.780 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.112526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.112553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.112706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.112740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.112949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.112983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.113149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.113177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.113330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.113362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.113550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.113577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.113731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.113765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.113948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.113977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 
00:24:54.781 [2024-07-16 00:28:46.114133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.114163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.114348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.114375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.114526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.114558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.114750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.114778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.114977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.115005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.115186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.115213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.115366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.115398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.115604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.115632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.115797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.115826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.116036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.116067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 
00:24:54.781 [2024-07-16 00:28:46.116252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.116278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.116485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.116513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.116665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.116694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.116903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.116930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.117090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.117117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.117297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.117324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.117505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.117534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.117708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.117738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.117894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.117923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 00:24:54.781 [2024-07-16 00:28:46.118097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.781 [2024-07-16 00:28:46.118124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.781 qpair failed and we were unable to recover it. 
00:24:54.782 [2024-07-16 00:28:46.118264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.118296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.118452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.118479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.118627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.118659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.118810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.118838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.119002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.119030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.119204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.119231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.119389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.119421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.119582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.119609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.119781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.119808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.120005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.120035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 
00:24:54.782 [2024-07-16 00:28:46.120190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.120216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.120399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.120426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.120583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.120610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.120764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.120793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.120986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.121014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.121169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.121199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.121387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.121419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.121602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.121629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.121773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.121799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.121962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.121990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 
00:24:54.782 [2024-07-16 00:28:46.122150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.122177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.122330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.122356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.122516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.122543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.122718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.122745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.122963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.122992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.123143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.123169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.123327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.123354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.782 qpair failed and we were unable to recover it. 00:24:54.782 [2024-07-16 00:28:46.123533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.782 [2024-07-16 00:28:46.123562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.123747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.123774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.123959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.123987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 
00:24:54.783 [2024-07-16 00:28:46.124165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.124192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.124381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.124408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.124557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.124584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.124759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.124789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.124981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.125008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.125163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.125197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.125360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.125387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.125538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.125565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.125740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.125767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.125969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.125997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 
00:24:54.783 [2024-07-16 00:28:46.126141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.126173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.126359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.126386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.126559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.126590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.126741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.126768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.126948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.126974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.127158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.127186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.127358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.127385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.127539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.127566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.127749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.783 [2024-07-16 00:28:46.127776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.783 qpair failed and we were unable to recover it. 00:24:54.783 [2024-07-16 00:28:46.127927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.784 [2024-07-16 00:28:46.127955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:54.784 qpair failed and we were unable to recover it. 
00:24:54.785 [2024-07-16 00:28:46.139898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.785 [2024-07-16 00:28:46.139930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.785 qpair failed and we were unable to recover it.
00:24:54.785 [the same failure sequence then repeats verbatim for tqpair=0x7f4cfc000b90 through 2024-07-16 00:28:46.167342]
00:24:54.789 [2024-07-16 00:28:46.167515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.167541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.167715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.167740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.167918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.167944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.168095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.168120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.168295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.168321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.168460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.168486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.168654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.168680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.168854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.168884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.169042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.169067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.169237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.169262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 
00:24:54.789 [2024-07-16 00:28:46.169441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.169466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.169636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.169662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.169813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.169838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.169985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.170011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.170186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.170211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.170390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.170415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.170615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.170640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.170819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.170844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.171024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.171050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.171224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.171250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 
00:24:54.789 [2024-07-16 00:28:46.171398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.171423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.171591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.171617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.171818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.171844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.172056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.172082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.172236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.789 [2024-07-16 00:28:46.172261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.789 qpair failed and we were unable to recover it. 00:24:54.789 [2024-07-16 00:28:46.172432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.172457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.172625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.172651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.172852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.172883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.173042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.173068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.173208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.173234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 
00:24:54.790 [2024-07-16 00:28:46.173373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.173399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.173548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.173574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.173720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.173746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.173899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.173926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.174097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.174123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.174271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.174296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.174441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.174466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.174668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.174693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.174871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.174902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.175069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.175095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 
00:24:54.790 [2024-07-16 00:28:46.175233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.175258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.175458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.175484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.175633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.175659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.175811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.175837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.176014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.176039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.176216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.176242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.176427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.176453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.176630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.176656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.176811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.176837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.177041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.177067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 
00:24:54.790 [2024-07-16 00:28:46.177238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.177264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.177419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.177445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.177597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.177622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.177797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.177823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.177991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.178017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.178218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.178243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.178441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.178466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.178615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.178640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.178814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.178840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.179020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.179050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 
00:24:54.790 [2024-07-16 00:28:46.179204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.179229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.179401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.179427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.179603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.179629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.179786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.179812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.179988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.180014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.180161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.790 [2024-07-16 00:28:46.180187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.790 qpair failed and we were unable to recover it. 00:24:54.790 [2024-07-16 00:28:46.180362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.180389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.180548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.180573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.180744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.180769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.180921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.180947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 
00:24:54.791 [2024-07-16 00:28:46.181096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.181121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.181277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.181303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.181453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.181479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.181683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.181709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.181904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.181930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.182101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.182127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.182296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.182322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.182520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.182546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.182719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.182744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.182909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.182936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 
00:24:54.791 [2024-07-16 00:28:46.183107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.183132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.183307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.183333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.183484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.183509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.183683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.183708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.183882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.183908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.184108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.184133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.184312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.184337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.184509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.184534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.184687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.184713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.184851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.184881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 
00:24:54.791 [2024-07-16 00:28:46.185083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.185109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.185250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.185276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.185450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.185476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.185649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.185676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.185821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.185846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.186022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.186048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.186250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.186276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.186449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.186474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.186641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.186666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.186844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.186874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 
00:24:54.791 [2024-07-16 00:28:46.187052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.187077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.187221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.187247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.187419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.187444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.187656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.187682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.187847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.187873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.188059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.188084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.791 [2024-07-16 00:28:46.188239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.791 [2024-07-16 00:28:46.188264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.791 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.188460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.188485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.188651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.188676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.188830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.188856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 
00:24:54.792 [2024-07-16 00:28:46.189004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.189030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.189203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.189229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.189399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.189425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.189602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.189628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.189794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.189820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.189969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.189996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.190144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.190170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.190370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.190395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.190550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.190576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.190729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.190754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 
00:24:54.792 [2024-07-16 00:28:46.190937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.190963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.191145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.191170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.191319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.191344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.191496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.191522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.191678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.191704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.191905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.191931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.192089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.192115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.192264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.192289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.192491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.192516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 00:24:54.792 [2024-07-16 00:28:46.192666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.192691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it. 
00:24:54.792 [2024-07-16 00:28:46.192862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.792 [2024-07-16 00:28:46.192892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.792 qpair failed and we were unable to recover it.
[... output condensed: the identical error triple (posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without variation from [2024-07-16 00:28:46.192862] through [2024-07-16 00:28:46.233641] (console timestamps 00:24:54.792-00:24:54.798) ...]
00:24:54.798 [2024-07-16 00:28:46.233816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.233842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.234052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.234079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.234231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.234258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.234447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.234473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.234661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.234687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.234861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.234894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.235062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.235087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.235249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.235274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.235482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.235507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.235653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.235678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 
00:24:54.798 [2024-07-16 00:28:46.235845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.235870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.236049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.236074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.236252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.236277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.236454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.236479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.236627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.236652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.798 [2024-07-16 00:28:46.236798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.798 [2024-07-16 00:28:46.236824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.798 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.237022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.237048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.237249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.237275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.237447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.237472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.237650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.237676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 
00:24:54.799 [2024-07-16 00:28:46.237852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.237882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.238055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.238080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.238225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.238252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.238428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.238454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.238596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.238622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.238775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.238800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.238988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.239015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.239165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.239190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.239363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.239389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.239558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.239584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 
00:24:54.799 [2024-07-16 00:28:46.239728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.239754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.239952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.239978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.240151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.240177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.240354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.240381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.240559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.240584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.240724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.240749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.240952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.240978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.241119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.241144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.241317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.241342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.241519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.241549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 
00:24:54.799 [2024-07-16 00:28:46.241725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.241751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.241908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.241934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.242110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.242135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.242312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.242337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.242491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.242516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.242690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.242715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.242892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.242919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.243092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.243118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.243287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.243313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.243465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.243492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 
00:24:54.799 [2024-07-16 00:28:46.243663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.243688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.799 qpair failed and we were unable to recover it. 00:24:54.799 [2024-07-16 00:28:46.243892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.799 [2024-07-16 00:28:46.243918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.244096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.244122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.244299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.244325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.244494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.244520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.244695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.244721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.244899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.244925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.245066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.245091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.245299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.245325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.245506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.245531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 
00:24:54.800 [2024-07-16 00:28:46.245684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.245710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.245886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.245912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.246064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.246090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.246267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.246293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.246464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.246490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.246633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.246659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.246798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.246825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.246977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.247003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.247182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.247209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.247347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.247372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 
00:24:54.800 [2024-07-16 00:28:46.247573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.247598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.247773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.247799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.247970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.247996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.248174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.248200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.248375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.248401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.248574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.248600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.248750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.248775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.248950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.248976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.249173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.249198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.249370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.249400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 
00:24:54.800 [2024-07-16 00:28:46.249599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.249624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.249792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.249818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.250019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.250045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.250196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.250222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.250378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.250404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.250577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.250603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.250745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.250771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.250967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.250993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.251139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.251164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.251337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.251362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 
00:24:54.800 [2024-07-16 00:28:46.251510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.800 [2024-07-16 00:28:46.251536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.800 qpair failed and we were unable to recover it. 00:24:54.800 [2024-07-16 00:28:46.251735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.251761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.251919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.251945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.252123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.252150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.252322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.252348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.252556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.252582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.252729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.252755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.252937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.252963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.253109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.253135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.253314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.253341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 
00:24:54.801 [2024-07-16 00:28:46.253544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.253570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.253742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.253767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.253949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.253975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.254147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.254173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.254328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.254354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.254532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.254557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.254735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.254761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.254966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.254992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.255163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.255189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.255335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.255361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 
00:24:54.801 [2024-07-16 00:28:46.255557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.255582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.255777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.255802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.256005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.256031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.256204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.256230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.256379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.256405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.256602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.256627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.256830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.256856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.257067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.257093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.257265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.257290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.257464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.257494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 
00:24:54.801 [2024-07-16 00:28:46.257669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.257695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.257872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.257903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.258058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.258084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.258264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.258290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.801 [2024-07-16 00:28:46.258462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.801 [2024-07-16 00:28:46.258488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.801 qpair failed and we were unable to recover it. 00:24:54.802 [2024-07-16 00:28:46.258633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.802 [2024-07-16 00:28:46.258658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.802 qpair failed and we were unable to recover it. 00:24:54.802 [2024-07-16 00:28:46.258810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.802 [2024-07-16 00:28:46.258836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.802 qpair failed and we were unable to recover it. 00:24:54.802 [2024-07-16 00:28:46.258999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.802 [2024-07-16 00:28:46.259027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.802 qpair failed and we were unable to recover it. 00:24:54.802 [2024-07-16 00:28:46.259178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.802 [2024-07-16 00:28:46.259205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.802 qpair failed and we were unable to recover it. 00:24:54.802 [2024-07-16 00:28:46.259403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.802 [2024-07-16 00:28:46.259429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:54.802 qpair failed and we were unable to recover it. 
00:24:54.802 [2024-07-16 00:28:46.259605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.802 [2024-07-16 00:28:46.259630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:54.802 qpair failed and we were unable to recover it.
[... the preceding three messages repeat 55 more times for tqpair=0x7f4cfc000b90, through 2024-07-16 00:28:46.270694 ...]
00:24:54.803 [2024-07-16 00:28:46.270893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.803 [2024-07-16 00:28:46.270933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:54.803 qpair failed and we were unable to recover it.
[... the preceding three messages repeat 153 more times for tqpair=0xecc3f0, through 2024-07-16 00:28:46.300493 ...]
00:24:54.807 [2024-07-16 00:28:46.300670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.300695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.300843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.300868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.301071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.301097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.301271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.301296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.301471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.301496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.301673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.301699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.301883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.301908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.302086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.302111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.302280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.302306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.302454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.302488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 
00:24:54.807 [2024-07-16 00:28:46.302666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.302691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.302840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.302866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.303050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.303075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.303279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.303304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.303449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.303475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.303626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.303651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.303799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.303824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.304001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.304027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.304200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.304225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.304397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.304422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 
00:24:54.807 [2024-07-16 00:28:46.304600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.304625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.304765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.304791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.304960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.304986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.305190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.305215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.305357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.305383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.305528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.305554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.305728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.305754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.305924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.305949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.306115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.306140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.306327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.306353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 
00:24:54.807 [2024-07-16 00:28:46.306532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.306557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.306728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.306754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.306925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.306951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.307150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.807 [2024-07-16 00:28:46.307176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.807 qpair failed and we were unable to recover it. 00:24:54.807 [2024-07-16 00:28:46.307347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.307372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.307547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.307572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.307740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.307766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.307951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.307977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.308153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.308179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.308380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.308405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 
00:24:54.808 [2024-07-16 00:28:46.308605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.308630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.308808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.308834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.308990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.309016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.309215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.309241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.309385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.309410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.309559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.309585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.309763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.309789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.309956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.309982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.310157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.310182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.310333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.310359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 
00:24:54.808 [2024-07-16 00:28:46.310510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.310539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.310680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.310706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.310888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.310914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.311062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.311087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.311239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.311264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.311437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.311462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.311638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.311664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.311817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.311842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.312037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.312064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.312229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.312254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 
00:24:54.808 [2024-07-16 00:28:46.312424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.312450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.312615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.312641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.312808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.312833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.312997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.313023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.313228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.313253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.313403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.313429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.313629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.313655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.313810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.313835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.314003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.314029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.314216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.314241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 
00:24:54.808 [2024-07-16 00:28:46.314389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.314414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.314565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.314591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.314739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.314765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.314922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.314948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.315118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.315144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.315299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.315324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.315498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.315524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.315732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.315762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.315939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.808 [2024-07-16 00:28:46.315965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.808 qpair failed and we were unable to recover it. 00:24:54.808 [2024-07-16 00:28:46.316140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.316165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 
00:24:54.809 [2024-07-16 00:28:46.316346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.316372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.316565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.316590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.316743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.316769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.316966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.316992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.317143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.317173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.317377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.317403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.317567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.317592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.317744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.317769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.317947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.317973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.318125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.318151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 
00:24:54.809 [2024-07-16 00:28:46.318348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.318374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.318520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.318546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.318698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.318723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.318920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.318947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.319101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.319127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.319299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.319325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.319474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.319500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.319652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.319678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.319887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.319913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.320086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.320112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 
00:24:54.809 [2024-07-16 00:28:46.320316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.320341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.320482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.320508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.320686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.320711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.320891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.320917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.321056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.321082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.321257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.321282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.321477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.321503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.321675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.321700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.321850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.321887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.322064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.322090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 
00:24:54.809 [2024-07-16 00:28:46.322242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.322267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.322417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.322443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.322621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.322646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.322819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.322845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.323027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.323053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.323233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.323259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.323407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.323432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.323608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.323634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.809 qpair failed and we were unable to recover it. 00:24:54.809 [2024-07-16 00:28:46.323784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.809 [2024-07-16 00:28:46.323814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.323956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.323983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 
00:24:54.810 [2024-07-16 00:28:46.324162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.324188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.324339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.324365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.324523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.324548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.324719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.324744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.324899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.324925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.325106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.325132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.325282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.325307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.325452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.325477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.325678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.325703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 00:24:54.810 [2024-07-16 00:28:46.325855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.810 [2024-07-16 00:28:46.325886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:54.810 qpair failed and we were unable to recover it. 
00:24:54.810 [2024-07-16 00:28:46.326058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.810 [2024-07-16 00:28:46.326084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:54.810 qpair failed and we were unable to recover it.
[... the same three-line error group (connect() failed, errno = 111; sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt between 00:28:46.326 and 00:28:46.366, with only the timestamps advancing; the duplicate entries are omitted here ...]
00:24:55.086 [2024-07-16 00:28:46.366234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.086 [2024-07-16 00:28:46.366260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.086 qpair failed and we were unable to recover it.
00:24:55.086 [2024-07-16 00:28:46.366411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.366437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.366621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.366647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.366797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.366822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.366973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.366999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.367169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.367195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.367351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.367376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.367546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.367571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.367767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.367793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.367965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.367991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.368140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.368166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 
00:24:55.086 [2024-07-16 00:28:46.368347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.368372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.086 [2024-07-16 00:28:46.368556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.086 [2024-07-16 00:28:46.368582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.086 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.368735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.368762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.368952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.368978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.369179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.369205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.369383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.369409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.369582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.369607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.369776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.369802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.369953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.369980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.370122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.370149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 
00:24:55.087 [2024-07-16 00:28:46.370302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.370328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.370469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.370494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.370663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.370689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.370836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.370865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.371026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.371051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.371201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.371227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.371385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.371410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.371563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.371588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.371738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.371763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.371910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.371936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 
00:24:55.087 [2024-07-16 00:28:46.372132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.372158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.372303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.372328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.372495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.372521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.372722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.372747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.372911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.372937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.373093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.373118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.373265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.373291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.373471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.373497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.373642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.373667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.373845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.373871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 
00:24:55.087 [2024-07-16 00:28:46.374073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.374098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.374245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.374271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.374449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.374474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.374645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.374671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.374820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.374846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.375030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.375056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.375209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.375236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.375393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.375419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.375566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.375591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.375746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.375772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 
00:24:55.087 [2024-07-16 00:28:46.375957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.375988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.087 [2024-07-16 00:28:46.376168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.087 [2024-07-16 00:28:46.376194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.087 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.376372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.376398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.376547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.376572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.376752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.376777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.376955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.376981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.377135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.377161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.377310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.377335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.377506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.377532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.377707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.377733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 
00:24:55.088 [2024-07-16 00:28:46.377887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.377913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.378087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.378112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.378266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.378291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.378440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.378465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.378612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.378637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.378812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.378838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.378995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.379021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.379175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.379200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.379388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.379413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.379567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.379592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 
00:24:55.088 [2024-07-16 00:28:46.379769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.379795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.379945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.379971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.380156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.380182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.380325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.380350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.380553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.380578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.380726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.380752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.380942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.380968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.381124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.381150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.381330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.381356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.381535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.381561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 
00:24:55.088 [2024-07-16 00:28:46.381714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.381739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.381907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.381933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.382128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.382153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.382304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.382329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.382483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.382509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.382680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.382705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.382853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.382883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.383036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.383062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.383209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.383234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.383412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.383437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 
00:24:55.088 [2024-07-16 00:28:46.383609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.383634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.383809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.383838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.088 [2024-07-16 00:28:46.384007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.088 [2024-07-16 00:28:46.384033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.088 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.384200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.384226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.384372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.384397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.384575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.384600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.384775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.384800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.384948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.384974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.385122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.385148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.385323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.385350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 
00:24:55.089 [2024-07-16 00:28:46.385527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.385552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.385732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.385758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.385929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.385955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.386110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.386136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.386322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.386348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.386526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.386552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.386694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.386720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.386894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.386920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.387102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.387128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.387272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.387298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 
00:24:55.089 [2024-07-16 00:28:46.387483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.387509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.387684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.387709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.387898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.387924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.388103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.388129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.388279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.388305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.388476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.388502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.388659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.388684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.388892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.388918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.389091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.389117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.389302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.389327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 
00:24:55.089 [2024-07-16 00:28:46.389509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.389535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.389703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.389729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.389900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.389926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.390129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.390154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.390304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.390329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.390490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.390515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.390663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.089 [2024-07-16 00:28:46.390689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.089 qpair failed and we were unable to recover it. 00:24:55.089 [2024-07-16 00:28:46.390835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.090 [2024-07-16 00:28:46.390860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.090 qpair failed and we were unable to recover it. 00:24:55.090 [2024-07-16 00:28:46.391027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.090 [2024-07-16 00:28:46.391053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.090 qpair failed and we were unable to recover it. 00:24:55.090 [2024-07-16 00:28:46.391223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.090 [2024-07-16 00:28:46.391251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.090 qpair failed and we were unable to recover it. 
00:24:55.090 [2024-07-16 00:28:46.391425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.090 [2024-07-16 00:28:46.391450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.090 qpair failed and we were unable to recover it.
00:24:55.095 [2024-07-16 00:28:46.431861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.431893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.432066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.432091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.432238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.432264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.432413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.432439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.432618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.432644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.432816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.432841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.432991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.433016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.433155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.433180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.433331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.433356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.433535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.433560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 
00:24:55.095 [2024-07-16 00:28:46.433764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.433790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.433943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.433970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.434159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.434184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.434339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.434364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.434514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.434540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.434692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.434718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.434897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.434923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.435103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.435129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.435279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.435305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.435478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.435503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 
00:24:55.095 [2024-07-16 00:28:46.435661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.435687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.435864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.435895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.436048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.436073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.436250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.436280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.436433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.436459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.436628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.095 [2024-07-16 00:28:46.436654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.095 qpair failed and we were unable to recover it. 00:24:55.095 [2024-07-16 00:28:46.436820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.436845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.437050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.437076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.437272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.437298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.437445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.437471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 
00:24:55.096 [2024-07-16 00:28:46.437616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.437641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.437815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.437841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.437991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.438017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.438160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.438186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.438323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.438348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.438491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.438516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.438682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.438707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.438885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.438911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.439091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.439116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.439283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.439308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 
00:24:55.096 [2024-07-16 00:28:46.439489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.439515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.439659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.439684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.439854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.439885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.440093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.440119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.440261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.440286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.440465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.440490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.440692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.440717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.440918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.440944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.441115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.441140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.441309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.441334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 
00:24:55.096 [2024-07-16 00:28:46.441501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.441526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.441701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.441726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.441874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.441903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.442089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.442114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.442259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.442284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.442458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.442483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.442661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.442686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.442830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.442855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.443004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.443030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.443220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.443246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 
00:24:55.096 [2024-07-16 00:28:46.443423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.443448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.443603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.443628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.443776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.443802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.443945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.443971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.444157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.444186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.444360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.444385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.444556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.444582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.096 qpair failed and we were unable to recover it. 00:24:55.096 [2024-07-16 00:28:46.444758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.096 [2024-07-16 00:28:46.444783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.444951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.444977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.445148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.445174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 
00:24:55.097 [2024-07-16 00:28:46.445326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.445351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.445533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.445558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.445707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.445733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.445913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.445939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.446084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.446110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.446255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.446280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.446457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.446482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.446639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.446665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.446835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.446861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.447019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.447045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 
00:24:55.097 [2024-07-16 00:28:46.447214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.447239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.447414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.447440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.447585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.447611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.447791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.447816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.447996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.448022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.448176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.448201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.448348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.448373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.448551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.448576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.448775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.448800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.449056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.449081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 
00:24:55.097 [2024-07-16 00:28:46.449237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.449263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.449464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.449489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.449673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.449698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.449900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.449926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.450079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.450106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.450283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.450308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.450453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.450478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.450657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.450682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.450903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.450929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.451132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.451157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 
00:24:55.097 [2024-07-16 00:28:46.451305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.451331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.451500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.451525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.451680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.451705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.451880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.451906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.452057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.452082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.452237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.452262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.452444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.452469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.452636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.452662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.452839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.097 [2024-07-16 00:28:46.452865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.097 qpair failed and we were unable to recover it. 00:24:55.097 [2024-07-16 00:28:46.453019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.453045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 
00:24:55.098 [2024-07-16 00:28:46.453224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.453250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.453430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.453455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.453633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.453658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.453802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.453828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.454005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.454032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.454205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.454231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.454403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.454428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.454576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.454601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.454753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.454780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.454981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.455007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 
00:24:55.098 [2024-07-16 00:28:46.455178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.455203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.455349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.455375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.455549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.455575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.455720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.455745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.455890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.455916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.456068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.456094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.456270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.456296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.456440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.456465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.456616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.456642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 00:24:55.098 [2024-07-16 00:28:46.456799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.098 [2024-07-16 00:28:46.456824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.098 qpair failed and we were unable to recover it. 
00:24:55.098 [2024-07-16 00:28:46.456974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.098 [2024-07-16 00:28:46.457000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.098 qpair failed and we were unable to recover it.
[... the identical error triplet — posix.c:1023:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats for every connection attempt from 2024-07-16 00:28:46.457 through 00:28:46.498 ...]
00:24:55.103 [2024-07-16 00:28:46.498123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.103 [2024-07-16 00:28:46.498148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.103 qpair failed and we were unable to recover it.
00:24:55.103 [2024-07-16 00:28:46.498302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.103 [2024-07-16 00:28:46.498328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.103 qpair failed and we were unable to recover it. 00:24:55.103 [2024-07-16 00:28:46.498508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.103 [2024-07-16 00:28:46.498533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.103 qpair failed and we were unable to recover it. 00:24:55.103 [2024-07-16 00:28:46.498734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.103 [2024-07-16 00:28:46.498759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.103 qpair failed and we were unable to recover it. 00:24:55.103 [2024-07-16 00:28:46.498939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.103 [2024-07-16 00:28:46.498965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.103 qpair failed and we were unable to recover it. 00:24:55.103 [2024-07-16 00:28:46.499141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.103 [2024-07-16 00:28:46.499167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.103 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.499312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.499337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.499517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.499543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.499689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.499714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.499864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.499895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.500075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.500100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 
00:24:55.104 [2024-07-16 00:28:46.500274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.500300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.500477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.500502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.500657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.500683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.500829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.500854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.501036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.501062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.501236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.501261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.501439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.501464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.501641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.501667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.501814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.501839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.501993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.502019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 
00:24:55.104 [2024-07-16 00:28:46.502169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.502194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.502366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.502391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.502543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.502568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.502726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.502751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.502959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.502985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.503125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.503151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.503347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.503372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.503548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.503574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.503747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.503772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.503957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.503983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 
00:24:55.104 [2024-07-16 00:28:46.504148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.504173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.504320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.504345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.504521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.504546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.504720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.504746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.504926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.504952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.505131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.505156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.505309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.505334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.505500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.505529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.505701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.505727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.505904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.505930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 
00:24:55.104 [2024-07-16 00:28:46.506079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.506104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.506251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.506276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.506462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.506488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.506633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.506658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.506832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.506857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.104 qpair failed and we were unable to recover it. 00:24:55.104 [2024-07-16 00:28:46.507037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.104 [2024-07-16 00:28:46.507062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.507211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.507237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.507405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.507431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.507572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.507597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.507796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.507821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 
00:24:55.105 [2024-07-16 00:28:46.507981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.508007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.508191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.508216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.508386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.508411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.508544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.508569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.508767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.508792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.508994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.509020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.509194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.509219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.509361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.509386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.509596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.509622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.509787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.509813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 
00:24:55.105 [2024-07-16 00:28:46.509968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.509993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.510163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.510189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.510336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.510361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.510512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.510537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.510725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.510754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.510956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.510982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.511126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.511151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.511321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.511346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.511490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.511517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.511668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.511694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 
00:24:55.105 [2024-07-16 00:28:46.511869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.511901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.512074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.512099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.512275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.512300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.512471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.512496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.512666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.512691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.512861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.512892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.513070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.513095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.513276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.513301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.513485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.513511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.513707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.513733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 
00:24:55.105 [2024-07-16 00:28:46.513887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.513912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.514088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.514113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.514300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.514325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.514535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.514560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.514725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.514750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.514920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.105 [2024-07-16 00:28:46.514946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.105 qpair failed and we were unable to recover it. 00:24:55.105 [2024-07-16 00:28:46.515119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.515144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.515313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.515339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.515497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.515524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.515702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.515728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 
00:24:55.106 [2024-07-16 00:28:46.515904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.515930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.516085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.516110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.516290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.516315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.516490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.516515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.516692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.516718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.516868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.516905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.517082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.517108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.517307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.517332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.517528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.517553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.517697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.517722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 
00:24:55.106 [2024-07-16 00:28:46.517901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.517927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.518074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.518100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.518279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.518305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.518461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.518486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.518640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.518666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.518874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.518909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.519056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.519082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.519259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.519284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.519438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.519463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.519648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.519673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 
00:24:55.106 [2024-07-16 00:28:46.519850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.519882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.520056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.520081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.520257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.520282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.520458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.520484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.520635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.520661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.520806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.520832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.521018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.521044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.521186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.521212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.521363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.521388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.521567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.521593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 
00:24:55.106 [2024-07-16 00:28:46.521793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.521819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.522007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.522033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.522176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.522202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.522374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.522400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.522555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.522580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.106 qpair failed and we were unable to recover it. 00:24:55.106 [2024-07-16 00:28:46.522779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.106 [2024-07-16 00:28:46.522805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.107 qpair failed and we were unable to recover it. 00:24:55.107 [2024-07-16 00:28:46.523006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.107 [2024-07-16 00:28:46.523032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.107 qpair failed and we were unable to recover it. 00:24:55.107 [2024-07-16 00:28:46.523189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.107 [2024-07-16 00:28:46.523218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.107 qpair failed and we were unable to recover it. 00:24:55.107 [2024-07-16 00:28:46.523397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.107 [2024-07-16 00:28:46.523423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.107 qpair failed and we were unable to recover it. 00:24:55.107 [2024-07-16 00:28:46.523571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.107 [2024-07-16 00:28:46.523596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.107 qpair failed and we were unable to recover it. 
00:24:55.107 [2024-07-16 00:28:46.523740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.107 [2024-07-16 00:28:46.523766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.107 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, only the timestamps advancing, through 00:28:46.568 ...]
00:24:55.112 [2024-07-16 00:28:46.568852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.112 [2024-07-16 00:28:46.568907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.112 qpair failed and we were unable to recover it.
00:24:55.112 [2024-07-16 00:28:46.569147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.569173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.569328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.569353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.569552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.569580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.569774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.569800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.570022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.570050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.570271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.570299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.570532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.570557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.570702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.570728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.570920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.570949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.571167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.571193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 
00:24:55.112 [2024-07-16 00:28:46.571416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.571449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.571644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.571670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.571835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.571885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.572081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.572107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.572303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.572332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.572542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.572568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.572765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.572794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.572989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.573014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.573190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.573216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.573369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.573395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 
00:24:55.112 [2024-07-16 00:28:46.573542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.573568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.573711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.573736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.112 qpair failed and we were unable to recover it. 00:24:55.112 [2024-07-16 00:28:46.573926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.112 [2024-07-16 00:28:46.573955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.574131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.574157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.574357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.574384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.574597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.574623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.574813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.574841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.575074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.575100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.575298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.575327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.575544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.575572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 
00:24:55.113 [2024-07-16 00:28:46.575737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.575765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.575944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.575970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.576120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.576146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.576347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.576375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.576568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.576597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.576769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.576795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.576996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.577025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.577192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.577227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.577433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.577462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.577634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.577659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 
00:24:55.113 [2024-07-16 00:28:46.577860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.577899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.578059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.578087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.578289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.578317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.578492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.578518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.578699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.578750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.578927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.578956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.579151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.579179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.579374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.579400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.579552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.579578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.579724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.579749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 
00:24:55.113 [2024-07-16 00:28:46.579974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.580003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.580235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.580261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.580449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.580475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.580626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.580652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.580832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.580858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.581086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.581112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.581287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.581316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.581535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.581563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.581764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.581793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.581967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.581993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 
00:24:55.113 [2024-07-16 00:28:46.582151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.582177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.582368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.582397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.582592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.113 [2024-07-16 00:28:46.582620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.113 qpair failed and we were unable to recover it. 00:24:55.113 [2024-07-16 00:28:46.582792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.582818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.582995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.583022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.583203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.583231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.583454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.583480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.583650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.583676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.583872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.583909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.584079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.584107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 
00:24:55.114 [2024-07-16 00:28:46.584306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.584332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.584485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.584510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.584731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.584760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.584963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.584992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.585158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.585186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.585357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.585382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.585538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.585563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.585744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.585770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.585947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.585983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.586153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.586178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 
00:24:55.114 [2024-07-16 00:28:46.586355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.586384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.586578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.586608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.586810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.586838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.587044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.587070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.587332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.587388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.587554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.587582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.587772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.587800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.587981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.588008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.588203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.588232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.588396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.588425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 
00:24:55.114 [2024-07-16 00:28:46.588580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.588608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.588806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.588831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.589041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.589070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.589265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.589293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.589512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.589540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.589739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.589764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.589963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.589992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.590160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.590188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.590379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.590408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.590603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.590628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 
00:24:55.114 [2024-07-16 00:28:46.590796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.590824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.590999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.591027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.591252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.591280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.114 [2024-07-16 00:28:46.591448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.114 [2024-07-16 00:28:46.591473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.114 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.591650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.591679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.591872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.591908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.592079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.592105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.592255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.592280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.592445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.592473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.592667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.592696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 
00:24:55.115 [2024-07-16 00:28:46.592907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.592937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.593113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.593139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.593332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.593360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.593567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.593592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.593739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.593764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.593915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.593942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.594118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.594144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.594353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.594379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.594521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.594546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.594753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.594779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 
00:24:55.115 [2024-07-16 00:28:46.594981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.595010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.595177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.595206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.595367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.595395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.595561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.595586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.595781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.595809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.595974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.596002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.596198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.596227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.596422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.596447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.596608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.596637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 00:24:55.115 [2024-07-16 00:28:46.596840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.115 [2024-07-16 00:28:46.596866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.115 qpair failed and we were unable to recover it. 
00:24:55.115 [2024-07-16 00:28:46.597068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.115 [2024-07-16 00:28:46.597097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.115 qpair failed and we were unable to recover it.
00:24:55.121 [... the three lines above repeat ~200 more times with advancing timestamps through 2024-07-16 00:28:46.640851; every attempt fails identically: connect() errno = 111, tqpair=0xecc3f0, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:24:55.121 [2024-07-16 00:28:46.641034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.641060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.641241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.641267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.641411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.641437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.641615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.641641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.641797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.641823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.641994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.642022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.642215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.642243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.642414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.642440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.642601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.642630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.642849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.642884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 
00:24:55.121 [2024-07-16 00:28:46.643087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.643117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.643332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.643357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.643570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.643638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.643808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.643838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.644026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.644052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.644227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.644253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.644561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.644617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.644785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.644813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.645012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.645041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.645235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.645261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 
00:24:55.121 [2024-07-16 00:28:46.645449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.645476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.645684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.645712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.645919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.645946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.646123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.646149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.646336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.646361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.646555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.646583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.646803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.646831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.647012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.647038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.647225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.647278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.647470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.647498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 
00:24:55.121 [2024-07-16 00:28:46.647687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.647717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.647924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.647950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.648136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.648161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.648362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.648390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.648583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.648611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.648807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.648834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.649049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.649078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.649299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.649327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.649520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.649549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.121 qpair failed and we were unable to recover it. 00:24:55.121 [2024-07-16 00:28:46.649740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.121 [2024-07-16 00:28:46.649766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 
00:24:55.122 [2024-07-16 00:28:46.649968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.649997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.650166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.650196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.650410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.650438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.650665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.650690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.650899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.650925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.651094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.651120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.651317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.651345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.651539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.651564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.651745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.651771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.651936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.651965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 
00:24:55.122 [2024-07-16 00:28:46.652157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.652185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.652377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.652406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.652604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.652655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.652894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.652924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.653142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.653170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.653338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.653363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.653560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.653611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.653807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.653835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.654022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.654048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.654212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.654238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 
00:24:55.122 [2024-07-16 00:28:46.654502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.654554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.654724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.654752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.654947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.654974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.655143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.655168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.655379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.655407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.655575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.655603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.655821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.655849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.656079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.656105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.656369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.656397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.656582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.656611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 
00:24:55.122 [2024-07-16 00:28:46.656829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.656857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.657067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.657093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.657249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.657274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.657477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.657503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.657711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.657739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.657963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.657989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.658219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.658247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.658448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.658476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.658660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.658694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 00:24:55.122 [2024-07-16 00:28:46.658892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.122 [2024-07-16 00:28:46.658919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.122 qpair failed and we were unable to recover it. 
00:24:55.122 [2024-07-16 00:28:46.659115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.659142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.659319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.659346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.659546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.659575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.659780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.659806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.659983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.660012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.660203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.660232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.660428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.660455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.660627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.660653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.660820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.660849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.661035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.661064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 
00:24:55.123 [2024-07-16 00:28:46.661220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.661249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.661409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.661435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.661620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.661675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.661847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.661882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.662070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.662099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.662300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.662326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.662477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.662503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.662678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.662705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.662914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.662944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.663139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.663165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 
00:24:55.123 [2024-07-16 00:28:46.663426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.663478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.663644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.663673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.663865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.663900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.664099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.664124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.664360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.664385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.664528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.664553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.664732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.664760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.664966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.664992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.665188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.665233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.665401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.665429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 
00:24:55.123 [2024-07-16 00:28:46.665604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.665630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.665827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.665852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.666038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.666066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.666255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.666284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.666485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.666513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.666687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.666713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.666902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.666929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.667135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.667163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.667320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.667349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.667525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.667555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 
00:24:55.123 [2024-07-16 00:28:46.667724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.667749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.123 [2024-07-16 00:28:46.667935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.123 [2024-07-16 00:28:46.667964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.123 qpair failed and we were unable to recover it. 00:24:55.124 [2024-07-16 00:28:46.668161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.124 [2024-07-16 00:28:46.668189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.124 qpair failed and we were unable to recover it. 00:24:55.124 [2024-07-16 00:28:46.668388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.124 [2024-07-16 00:28:46.668414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.124 qpair failed and we were unable to recover it. 00:24:55.124 [2024-07-16 00:28:46.668610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.124 [2024-07-16 00:28:46.668638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.124 qpair failed and we were unable to recover it. 00:24:55.124 [2024-07-16 00:28:46.668864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.124 [2024-07-16 00:28:46.668896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.124 qpair failed and we were unable to recover it. 00:24:55.124 [2024-07-16 00:28:46.669076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.124 [2024-07-16 00:28:46.669102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.124 qpair failed and we were unable to recover it. 00:24:55.124 [2024-07-16 00:28:46.669253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.124 [2024-07-16 00:28:46.669278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.124 qpair failed and we were unable to recover it. 00:24:55.124 [2024-07-16 00:28:46.669501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.124 [2024-07-16 00:28:46.669550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.124 qpair failed and we were unable to recover it. 00:24:55.124 [2024-07-16 00:28:46.669705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.124 [2024-07-16 00:28:46.669733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.124 qpair failed and we were unable to recover it. 
00:24:55.124 [2024-07-16 00:28:46.669969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.124 [2024-07-16 00:28:46.669998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.124 qpair failed and we were unable to recover it.
00:24:55.124 - 00:24:55.408 (the identical three-message sequence -- connect() failed, errno = 111; sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats continuously from [2024-07-16 00:28:46.670232] through [2024-07-16 00:28:46.716581]; only the timestamps differ)
00:24:55.408 [2024-07-16 00:28:46.716802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.408 [2024-07-16 00:28:46.716827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.408 qpair failed and we were unable to recover it.
00:24:55.408 [2024-07-16 00:28:46.717027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.717056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.717254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.717283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.717507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.717533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.717678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.717703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.717884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.717910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.718110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.718138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.718324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.718352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.718548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.718573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.718730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.718759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.718927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.718956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 
00:24:55.408 [2024-07-16 00:28:46.719143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.719172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.719348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.719374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.719543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.719585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.719780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.719809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.719997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.720025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.720203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.720229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.720401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.720430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.720661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.720687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.720857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.720889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.721094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.721119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 
00:24:55.408 [2024-07-16 00:28:46.721389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.721438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.721630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.721658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.721852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.721886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.722065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.722090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.722291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.722323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.722553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.722582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.408 qpair failed and we were unable to recover it. 00:24:55.408 [2024-07-16 00:28:46.722820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.408 [2024-07-16 00:28:46.722845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.723056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.723082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.723319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.723347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.723542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.723570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 
00:24:55.409 [2024-07-16 00:28:46.723796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.723821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.724024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.724050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.724362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.724416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.724607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.724635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.724802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.724830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.725046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.725072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.725304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.725333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.725532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.725557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.725751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.725779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.725953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.725979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 
00:24:55.409 [2024-07-16 00:28:46.726224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.726270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.726439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.726467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.726628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.726656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.726848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.726873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.727071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.727099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.727289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.727317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.727488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.727517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.727734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.727759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.727988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.728042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.728233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.728261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 
00:24:55.409 [2024-07-16 00:28:46.728426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.728454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.728653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.728682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.728891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.728920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.729087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.729115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.729337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.729365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.729565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.729590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.729787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.729815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.730041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.730067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.730247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.730272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.730450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.730475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 
00:24:55.409 [2024-07-16 00:28:46.730722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.730778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.730948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.730977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.731195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.731221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.731396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.731421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.731679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.731727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.731936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.731963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.409 qpair failed and we were unable to recover it. 00:24:55.409 [2024-07-16 00:28:46.732165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.409 [2024-07-16 00:28:46.732194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.732366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.732391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.732546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.732571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.732749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.732774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 
00:24:55.410 [2024-07-16 00:28:46.732949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.732978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.733173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.733198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.733419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.733447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.733637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.733665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.733865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.733905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.734107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.734133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.734281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.734307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.734524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.734553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.734755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.734780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.734985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.735011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 
00:24:55.410 [2024-07-16 00:28:46.735205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.735233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.735426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.735455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.735644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.735673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.735895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.735921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.736126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.736154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.736341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.736369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.736568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.736594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.736768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.736793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.736969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.736994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.737171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.737199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 
00:24:55.410 [2024-07-16 00:28:46.737391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.737419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.737590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.737616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.737806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.737839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.738070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.738100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.738285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.738313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.738510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.738535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.738696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.738724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.738949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.738975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.739168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.739196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.739387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.739413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 
00:24:55.410 [2024-07-16 00:28:46.739579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.739608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.739798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.739827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.740059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.740086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.740283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.740309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.740631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.740680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.740901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.740930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.741114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.741142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.410 [2024-07-16 00:28:46.741323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.410 [2024-07-16 00:28:46.741349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.410 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.741530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.741586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.741805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.741833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 
00:24:55.411 [2024-07-16 00:28:46.742045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.742074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.742296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.742322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.742560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.742610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.742778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.742806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.743021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.743050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.743225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.743250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.743450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.743476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.743688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.743716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.743901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.743930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.744103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.744129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 
00:24:55.411 [2024-07-16 00:28:46.744330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.744356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.744531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.744560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.744751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.744779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.744976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.745002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.745286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.745339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.745533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.745561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.745751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.745779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.746005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.746031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.746210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.746238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 00:24:55.411 [2024-07-16 00:28:46.746427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.411 [2024-07-16 00:28:46.746455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.411 qpair failed and we were unable to recover it. 
00:24:55.411 [2024-07-16 00:28:46.746622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.411 [2024-07-16 00:28:46.746650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.411 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back from 00:28:46.746844 through 00:28:46.794032 ...]
00:24:55.416 [2024-07-16 00:28:46.794233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.416 [2024-07-16 00:28:46.794258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.416 qpair failed and we were unable to recover it.
00:24:55.416 [2024-07-16 00:28:46.794515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.794567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.794761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.794789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.795012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.795040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.795240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.795266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.795418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.795443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.795616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.795644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.795858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.795894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.796072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.796097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.796251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.796276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.416 qpair failed and we were unable to recover it. 00:24:55.416 [2024-07-16 00:28:46.796461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.416 [2024-07-16 00:28:46.796486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 
00:24:55.417 [2024-07-16 00:28:46.796704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.796729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.796936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.796963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.797135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.797163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.797381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.797409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.797582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.797611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.797802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.797828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.797973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.797999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.798175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.798218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.798408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.798436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.798637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.798662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 
00:24:55.417 [2024-07-16 00:28:46.798804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.798829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.799053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.799082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.799244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.799272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.799467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.799492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.799702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.799765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.799961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.799990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.800155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.800183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.800400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.800425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.800642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.800692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.800917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.800946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 
00:24:55.417 [2024-07-16 00:28:46.801141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.801169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.801364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.801389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.801574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.801599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.801743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.801769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.801971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.801997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.802218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.802244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.802509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.802557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.802775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.802803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.802994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.803024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.803199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.803224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 
00:24:55.417 [2024-07-16 00:28:46.803445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.803473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.803643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.803671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.803833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.803861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.804070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.804095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.804371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.804420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.417 qpair failed and we were unable to recover it. 00:24:55.417 [2024-07-16 00:28:46.804613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.417 [2024-07-16 00:28:46.804643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.804801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.804830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.805061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.805087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.805321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.805373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.805567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.805595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 
00:24:55.418 [2024-07-16 00:28:46.805759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.805787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.805989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.806015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.806205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.806269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.806488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.806517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.806711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.806740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.806916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.806943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.807176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.807204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.807372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.807400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.807593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.807621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.807800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.807825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 
00:24:55.418 [2024-07-16 00:28:46.808048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.808077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.808276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.808304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.808502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.808528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.808726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.808751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.808925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.808954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.809170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.809203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.809428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.809456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.809656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.809682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.809890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.809919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.810111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.810139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 
00:24:55.418 [2024-07-16 00:28:46.810331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.810359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.810555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.810580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.810798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.810826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.810992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.811021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.811240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.811268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.811439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.811465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.811671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.811724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.811895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.811923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.812138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.812166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.812348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.812375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 
00:24:55.418 [2024-07-16 00:28:46.812572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.812601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.812820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.812848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.813029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.813055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.813232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.813257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.813466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.813524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.813719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.813747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.418 [2024-07-16 00:28:46.813935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.418 [2024-07-16 00:28:46.813964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.418 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.814128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.814154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.814378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.814406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.814599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.814628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 
00:24:55.419 [2024-07-16 00:28:46.814786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.814814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.815039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.815065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.815237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.815265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.815488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.815516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.815714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.815742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.815910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.815936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.816137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.816162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.816375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.816403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.816582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.816609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.816775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.816801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 
00:24:55.419 [2024-07-16 00:28:46.816975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.817001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.817198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.817226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.817426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.817451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.817625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.817651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.817814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.817842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.818018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.818047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.818241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.818269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.818445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.818470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.818694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.818744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.818910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.818939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 
00:24:55.419 [2024-07-16 00:28:46.819128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.819157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.819331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.819357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.819550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.819602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.819778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.819806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.819970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.819999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.820173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.820199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.820440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.820468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.820631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.820660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.820851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.820887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.821084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.821109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 
00:24:55.419 [2024-07-16 00:28:46.821313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.821341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.821509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.821537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.821705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.821733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.821921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.821947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.822112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.822140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.822336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.822365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.822530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.822559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.822753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.419 [2024-07-16 00:28:46.822779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.419 qpair failed and we were unable to recover it. 00:24:55.419 [2024-07-16 00:28:46.822979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.823008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.823177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.823206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 
00:24:55.420 [2024-07-16 00:28:46.823392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.823420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.823607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.823632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.823799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.823827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.824024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.824057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.824254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.824282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.824479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.824504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.824670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.824698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.824860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.824898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.825066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.825092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 00:24:55.420 [2024-07-16 00:28:46.825268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.420 [2024-07-16 00:28:46.825293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.420 qpair failed and we were unable to recover it. 
00:24:55.425 [2024-07-16 00:28:46.867813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.867841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.868050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.868079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.868240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.868269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.868436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.868461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.868632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.868660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.868815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.868844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.869046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.869072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.869229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.869254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.869393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.869435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.869620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.869648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 
00:24:55.425 [2024-07-16 00:28:46.869832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.869861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.870049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.870075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.870245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.870273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.870469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.870498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.870652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.870680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.870851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.870886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.871053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.871081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.871303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.871331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.871496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.871524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.871693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.871718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 
00:24:55.425 [2024-07-16 00:28:46.871900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.871929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.872124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.872152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.872364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.872392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.872591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.872616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.872786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.872811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.873000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.873027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.873225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.873253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.873427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.873453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.873625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.873653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.425 [2024-07-16 00:28:46.873810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.873838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 
00:24:55.425 [2024-07-16 00:28:46.874019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.425 [2024-07-16 00:28:46.874048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.425 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.874239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.874265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.874466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.874495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.874655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.874683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.874872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.874910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.875099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.875124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.875297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.875325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.875524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.875552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.875747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.875780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.875983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.876010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 
00:24:55.426 [2024-07-16 00:28:46.876187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.876213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.876399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.876427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.876623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.876651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.876850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.876885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.877057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.877085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.877276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.877304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.877464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.877492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.877713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.877739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.877949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.877974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.878145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.878173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 
00:24:55.426 [2024-07-16 00:28:46.878332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.878361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.878548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.878573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.878746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.878775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.878971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.879000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.879190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.879216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.879389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.879415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.879610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.879638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.879831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.879860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.880039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.880067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.880233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.880258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 
00:24:55.426 [2024-07-16 00:28:46.880455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.880483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.880644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.880673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.880884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.880913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.881097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.881122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.881320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.881349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.881520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.426 [2024-07-16 00:28:46.881553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.426 qpair failed and we were unable to recover it. 00:24:55.426 [2024-07-16 00:28:46.881722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.881751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.881919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.881945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.882097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.882123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.882297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.882323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 
00:24:55.427 [2024-07-16 00:28:46.882502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.882530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.882691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.882716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.882946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.882975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.883174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.883203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.883362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.883390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.883559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.883584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.883754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.883780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.883959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.883985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.884137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.884163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.884313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.884339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 
00:24:55.427 [2024-07-16 00:28:46.884527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.884556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.884716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.884745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.884942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.884971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.885142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.885168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.885369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.885397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.885561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.885591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.885755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.885784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.885982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.886009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.886184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.886212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.886404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.886433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 
00:24:55.427 [2024-07-16 00:28:46.886651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.886680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.886848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.886874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.887071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.887096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.887295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.887323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.887483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.887511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.887710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.887735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.887913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.887939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.888145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.888173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.888392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.888420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.888609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.888635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 
00:24:55.427 [2024-07-16 00:28:46.888779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.888804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.888978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.889005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.889178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.889206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.889398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.889423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.889585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.889612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.889835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.889860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.890048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.427 [2024-07-16 00:28:46.890077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.427 qpair failed and we were unable to recover it. 00:24:55.427 [2024-07-16 00:28:46.890262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.890288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.890455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.890483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.890633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.890660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 
00:24:55.428 [2024-07-16 00:28:46.890821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.890849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.891022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.891049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.891205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.891230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.891403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.891428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.891605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.891631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.891775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.891800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.891996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.892024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.892260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.892288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.892493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.892519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.892687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.892713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 
00:24:55.428 [2024-07-16 00:28:46.892894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.892923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.893160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.893185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.893386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.893415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.893588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.893613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.893801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.893829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.894020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.894049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.894209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.894237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.894438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.894465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.894663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.894691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 00:24:55.428 [2024-07-16 00:28:46.894920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.428 [2024-07-16 00:28:46.894946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.428 qpair failed and we were unable to recover it. 
00:24:55.428 [2024-07-16 00:28:46.895141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.428 [2024-07-16 00:28:46.895169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.428 qpair failed and we were unable to recover it.
00:24:55.428 [... the identical three-line sequence (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 00:28:46.895 and 00:28:46.940, with only the timestamps changing ...]
00:24:55.433 [2024-07-16 00:28:46.940071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.433 [2024-07-16 00:28:46.940116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.433 qpair failed and we were unable to recover it.
00:24:55.433 [... the same sequence then repeats about ten more times for tqpair=0x7f4cec000b90 (addr=10.0.0.2, port=4420) through 00:28:46.942 ...]
00:24:55.433 [2024-07-16 00:28:46.942689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.942719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.433 qpair failed and we were unable to recover it. 00:24:55.433 [2024-07-16 00:28:46.942888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.942918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.433 qpair failed and we were unable to recover it. 00:24:55.433 [2024-07-16 00:28:46.943113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.943142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.433 qpair failed and we were unable to recover it. 00:24:55.433 [2024-07-16 00:28:46.943314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.943340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.433 qpair failed and we were unable to recover it. 00:24:55.433 [2024-07-16 00:28:46.943558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.943587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.433 qpair failed and we were unable to recover it. 00:24:55.433 [2024-07-16 00:28:46.943750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.943779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.433 qpair failed and we were unable to recover it. 00:24:55.433 [2024-07-16 00:28:46.943972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.944002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.433 qpair failed and we were unable to recover it. 00:24:55.433 [2024-07-16 00:28:46.944179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.944204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.433 qpair failed and we were unable to recover it. 00:24:55.433 [2024-07-16 00:28:46.944395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.433 [2024-07-16 00:28:46.944424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.944596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.944625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 
00:24:55.434 [2024-07-16 00:28:46.944790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.944819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.945054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.945081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.945279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.945305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.945518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.945547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.945776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.945802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.945976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.946002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.946203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.946264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.946484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.946513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.946721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.946749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.946976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.947003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 
00:24:55.434 [2024-07-16 00:28:46.947205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.947266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.947489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.947518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.947717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.947746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.947951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.947979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.948212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.948240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.948469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.948498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.948660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.948691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.948917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.948944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.949172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.949201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.949411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.949439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 
00:24:55.434 [2024-07-16 00:28:46.949628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.949656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.949870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.949903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.950082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.950112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.950309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.950338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.950559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.950588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.950784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.950810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.950985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.951018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.951264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.951290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.951501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.951544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.951706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.951732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 
00:24:55.434 [2024-07-16 00:28:46.951914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.951940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.952120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.952146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.952299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.952326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.952468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.434 [2024-07-16 00:28:46.952495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.434 qpair failed and we were unable to recover it. 00:24:55.434 [2024-07-16 00:28:46.952723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.952752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.952912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.952941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.953138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.953166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.953387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.953413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.953661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.953716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.953892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.953923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 
00:24:55.435 [2024-07-16 00:28:46.954140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.954169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.954372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.954400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.954599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.954650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.954891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.954920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.955146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.955172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.955345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.955371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.955635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.955685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.955889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.955916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.956142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.956170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.956371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.956397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 
00:24:55.435 [2024-07-16 00:28:46.956664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.956714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.956914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.956945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.957150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.957177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.957352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.957379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.957580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.957614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.957811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.957839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.958009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.958038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.958233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.958260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.958461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.958490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.958682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.958711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 
00:24:55.435 [2024-07-16 00:28:46.958902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.958932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.959163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.959189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.959428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.959478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.959699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.959728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.959929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.959956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.960138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.960166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.960371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.960401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.960620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.960649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.960811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.960841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.961045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.961072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 
00:24:55.435 [2024-07-16 00:28:46.961248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.961277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.961483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.961512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.961670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.961700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.961901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.961927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.435 [2024-07-16 00:28:46.962079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.435 [2024-07-16 00:28:46.962105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.435 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.962282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.962308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.962516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.962545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.962770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.962796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.962998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.963027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.963259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.963287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 
00:24:55.436 [2024-07-16 00:28:46.963515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.963541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.963735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.963761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.964014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.964071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.964293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.964322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.964530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.964557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.964729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.964756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.964929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.964959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.965157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.965186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.965382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.965411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.965605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.965631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 
00:24:55.436 [2024-07-16 00:28:46.965782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.965808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.965989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.966016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.966213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.966242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.966444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.966471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.966669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.966702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.966901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.966931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.967159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.967185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.967362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.967388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.967590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.967618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.967836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.967865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 
00:24:55.436 [2024-07-16 00:28:46.968081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.968110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.968330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.968356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.968554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.968583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.968749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.968777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.968944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.968984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.969159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.969184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.969349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.969379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.969573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.969602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.969800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.969826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.969973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.969999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 
00:24:55.436 [2024-07-16 00:28:46.970163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.970192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.970384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.970413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.970643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.970671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.970841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.970868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.971080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.971121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.436 qpair failed and we were unable to recover it. 00:24:55.436 [2024-07-16 00:28:46.971324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.436 [2024-07-16 00:28:46.971350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.437 qpair failed and we were unable to recover it. 00:24:55.437 [2024-07-16 00:28:46.971522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.437 [2024-07-16 00:28:46.971548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.437 qpair failed and we were unable to recover it. 00:24:55.437 [2024-07-16 00:28:46.971724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.437 [2024-07-16 00:28:46.971750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.437 qpair failed and we were unable to recover it. 00:24:55.437 [2024-07-16 00:28:46.971903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.437 [2024-07-16 00:28:46.971930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.437 qpair failed and we were unable to recover it. 00:24:55.437 [2024-07-16 00:28:46.972137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.437 [2024-07-16 00:28:46.972166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.437 qpair failed and we were unable to recover it. 
00:24:55.437 [2024-07-16 00:28:46.972357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.437 [2024-07-16 00:28:46.972385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.437 qpair failed and we were unable to recover it.
00:24:55.437 [... identical posix_sock_create / nvme_tcp_qpair_connect_sock failures (errno = 111) for tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 repeat continuously from 00:28:46.972 through 00:28:47.021 ...]
00:24:55.723 [2024-07-16 00:28:47.021530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.723 [2024-07-16 00:28:47.021591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.723 qpair failed and we were unable to recover it. 00:24:55.723 [2024-07-16 00:28:47.021808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.723 [2024-07-16 00:28:47.021838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.723 qpair failed and we were unable to recover it. 00:24:55.723 [2024-07-16 00:28:47.022042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.723 [2024-07-16 00:28:47.022071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.723 qpair failed and we were unable to recover it. 00:24:55.723 [2024-07-16 00:28:47.022265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.723 [2024-07-16 00:28:47.022291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.723 qpair failed and we were unable to recover it. 00:24:55.723 [2024-07-16 00:28:47.022613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.723 [2024-07-16 00:28:47.022673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.022869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.022905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.023100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.023128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.023351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.023377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.023542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.023572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.023764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.023793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 
00:24:55.724 [2024-07-16 00:28:47.024012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.024042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.024244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.024270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.024471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.024501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.024698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.024727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.024933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.024960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.025146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.025172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.025387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.025417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.025583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.025611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.025834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.025863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.026052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.026079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 
00:24:55.724 [2024-07-16 00:28:47.026296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.026346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.026564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.026593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.026790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.026816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.026970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.027004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.027202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.027228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.027431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.027460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.027629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.027658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.027856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.027900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.028115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.028145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.028366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.028392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 
00:24:55.724 [2024-07-16 00:28:47.028569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.028598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.028833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.028859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.029034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.029064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.029256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.029286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.029443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.029472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.029697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.029723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.029934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.029963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.030184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.030213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.030435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.030464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.030693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.030719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 
00:24:55.724 [2024-07-16 00:28:47.030887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.030917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.031114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.031143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.031349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.031378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.031600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.724 [2024-07-16 00:28:47.031626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.724 qpair failed and we were unable to recover it. 00:24:55.724 [2024-07-16 00:28:47.031795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.031830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.032069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.032099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.032297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.032326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.032523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.032549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.032776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.032805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.033026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.033056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 
00:24:55.725 [2024-07-16 00:28:47.033250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.033279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.033501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.033527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.033808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.033864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.034098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.034127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.034359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.034388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.034615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.034641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.034817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.034847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.035056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.035085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.035309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.035339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.035539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.035565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 
00:24:55.725 [2024-07-16 00:28:47.035767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.035793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.035974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.036005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.036224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.036253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.036478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.036504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.036684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.036713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.036906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.036936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.037153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.037182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.037383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.037409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.037614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.037671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.037845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.037874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 
00:24:55.725 [2024-07-16 00:28:47.038102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.038131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.038340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.038366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.038651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.038710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.038928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.038958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.039188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.039217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.039454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.725 [2024-07-16 00:28:47.039480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.725 qpair failed and we were unable to recover it. 00:24:55.725 [2024-07-16 00:28:47.039710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.039739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.039904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.039934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.040096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.040126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.040348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.040374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 
00:24:55.726 [2024-07-16 00:28:47.040554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.040580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.040755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.040781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.040959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.040986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.041210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.041236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.041466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.041500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.041691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.041720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.041931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.041960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.042191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.042218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.042533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.042594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.042791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.042819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 
00:24:55.726 [2024-07-16 00:28:47.043014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.043044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.043239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.043265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.043461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.043490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.043655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.043684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.043857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.043897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.044121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.044147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.044346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.044375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.044541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.044570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.044801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.044831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.045040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.045067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 
00:24:55.726 [2024-07-16 00:28:47.045249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.045278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.045470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.045499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.045668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.045699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.045924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.045952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.046192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.046242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.046465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.046494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.046685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.046716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.046937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.046964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.047292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.047343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 00:24:55.726 [2024-07-16 00:28:47.047506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-16 00:28:47.047536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.726 qpair failed and we were unable to recover it. 
00:24:55.726 [2024-07-16 00:28:47.047766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.047795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.047976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.048003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.048318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.048377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.048568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.048597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.048816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.048846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.049044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.049071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.049349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.049401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.049623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.049652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.049858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.049896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.050092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.050118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 
00:24:55.727 [2024-07-16 00:28:47.050438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.050497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.050703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.050732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.050932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.050959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.051111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.051137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.051333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.051367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.051562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.051592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.051801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.051827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.052038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.052065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.052269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.052298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.052528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.052557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 
00:24:55.727 [2024-07-16 00:28:47.052748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.052777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.052981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.053008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.053228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.053258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.053419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.053449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.053614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.053644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.053846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.053872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.054097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.054126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.054320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.054350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.054518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.054548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.054751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.054777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 
00:24:55.727 [2024-07-16 00:28:47.054953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.054980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.055135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.055161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.055361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.055391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.055565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.055592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.055787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.055816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.056035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.056065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.056257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.056286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.056478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.056504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.056709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.056738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 00:24:55.727 [2024-07-16 00:28:47.056903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.727 [2024-07-16 00:28:47.056934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.727 qpair failed and we were unable to recover it. 
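Note: errno = 111 in the failures above is ECONNREFUSED on Linux — the host keeps retrying a TCP connect to 10.0.0.2:4420 while nothing is accepting connections there. A quick way to confirm the errno mapping on the build host (assumes python3 is installed; this command is illustrative and not part of the test run):

  $ python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
  111 Connection refused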
00:24:55.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2390271 Killed "${NVMF_APP[@]}" "$@"
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:55.728 [... interleaved connect()/qpair-failure records from 00:28:47.057133 through 00:28:47.058474 elided ...]
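The records above capture the failure mode for this stretch of the log: the nvmf_tgt process has just been SIGKILLed by the test at target_disconnect.sh line 36, so nothing is listening on 10.0.0.2:4420 and every host-side connect() is refused with errno 111 (ECONNREFUSED) until disconnect_init restarts the target. A minimal sketch, assuming only stock bash and coreutils (this probe is not part of the test suite), of checking the same condition:

    #!/usr/bin/env bash
    # Probe the target port the way the failing host qpairs do; a refused
    # TCP connect here corresponds to the errno 111 records in the log.
    addr=10.0.0.2 port=4420
    if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "target is listening on ${addr}:${port}"
    else
        echo "connect to ${addr}:${port} failed (cf. errno 111 = ECONNREFUSED)"
    fi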
00:24:55.728 [2024-07-16 00:28:47.058679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.728 [2024-07-16 00:28:47.058709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.728 qpair failed and we were unable to recover it.
00:24:55.728 [... identical connect()/qpair-failure records repeat for each retry from 00:28:47.058894 through 00:28:47.063026 ...]
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2390825
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2390825
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2390825 ']'
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:55.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:55.728 00:28:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:55.728 [... interleaved connect()/qpair-failure records from 00:28:47.063225 through 00:28:47.066207 elided ...]
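For context, waitforlisten (traced above from autotest_common.sh) is what gates the restart: it polls until the relaunched nvmf_tgt process (pid 2390825) is alive and its RPC socket /var/tmp/spdk.sock appears, giving up after max_retries=100. A minimal sketch of that polling pattern with hypothetical names (the real helper is waitforlisten in test/common/autotest_common.sh and does more, e.g. issuing an RPC over the socket):

    # Sketch only: poll until the target pid is up and its RPC socket exists.
    wait_for_rpc_sock() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up
            sleep 0.5
        done
        return 1                                     # gave up after 100 tries
    }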
00:24:55.729 [2024-07-16 00:28:47.066431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.729 [2024-07-16 00:28:47.066458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.729 qpair failed and we were unable to recover it.
00:24:55.729 [... identical connect()/qpair-failure records repeat for each retry from 00:28:47.066682 through 00:28:47.098870 ...]
00:24:55.732 [2024-07-16 00:28:47.099075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.099110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.099347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.099377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.099585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.099615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.099822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.099852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.100066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.100093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.100250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.100281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.100467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.100496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.100665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.100694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.100909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.100936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.732 qpair failed and we were unable to recover it. 00:24:55.732 [2024-07-16 00:28:47.101139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.732 [2024-07-16 00:28:47.101169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 
00:24:55.733 [2024-07-16 00:28:47.101380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.101411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.101617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.101647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.101847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.101875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.102121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.102156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.102318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.102355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.102553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.102583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.102753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.102787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.102996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.103026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.103258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.103288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.103494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.103527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 
00:24:55.733 [2024-07-16 00:28:47.103681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.103708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.103936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.103968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.104137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.104166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.104369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.104399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.104570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.104597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.104772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.104801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.105013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.105044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.105247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.105277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.105478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.105505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.105703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.105732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 
00:24:55.733 [2024-07-16 00:28:47.105961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.105989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.106193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.106222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.106424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.106450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.106620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.106658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.106888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.106925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.107112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.107142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.107349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.107375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.107581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.107611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.107833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.107862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.108120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.108151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 
00:24:55.733 [2024-07-16 00:28:47.108350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.108378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.108535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.108562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.108722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.108749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.108979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.109014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.109184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.109210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.109394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.109421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.109611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.109641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.109811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.109845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.110060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.110092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.733 qpair failed and we were unable to recover it. 00:24:55.733 [2024-07-16 00:28:47.110324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.733 [2024-07-16 00:28:47.110353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 
00:24:55.734 [2024-07-16 00:28:47.110550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.110580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.110765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.110794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.111017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.111046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.111239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.111273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.111472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.111501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.111692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.111728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.111945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.111972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.112168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.112201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.112399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.112445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.112642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.112672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 
00:24:55.734 [2024-07-16 00:28:47.112821] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:24:55.734 [2024-07-16 00:28:47.112869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.112909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 [2024-07-16 00:28:47.112918] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.113147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.113192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.113362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.113390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.113590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.113619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.113793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.113819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.114031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.114070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.114319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.114348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.114575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.114605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.114793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.114826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 
00:24:55.734 [2024-07-16 00:28:47.115016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.115047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.115228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.115265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.115462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.115491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.115690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.115718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.115916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.115946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.116142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.116169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.116348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.116376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.116575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.116602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.116797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.116828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.117051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.117082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 
00:24:55.734 [2024-07-16 00:28:47.117313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.117343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.117546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.117573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.117771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.117799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.118041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.734 [2024-07-16 00:28:47.118069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.734 qpair failed and we were unable to recover it. 00:24:55.734 [2024-07-16 00:28:47.118241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.118271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.118449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.118476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.118679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.118715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.118942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.118973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.119142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.119184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.119417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.119444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 
00:24:55.735 [2024-07-16 00:28:47.119620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.119650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.119843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.119873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.120092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.120122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.120334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.120362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.120536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.120563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.120781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.120811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.121018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.121049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.121257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.121287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.121468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.121498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.121692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.121727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 
00:24:55.735 [2024-07-16 00:28:47.121959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.121990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.122159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.122195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.122437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.122485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.122692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.122722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.122904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.122932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.123114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.123147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.123354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.123381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.123580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.123608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.123787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.123814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.123986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.124019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 
00:24:55.735 [2024-07-16 00:28:47.124231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.124261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.124485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.124516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.124689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.124719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.124911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.124938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.125172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.125201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.125373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.125404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.125569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.125603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.125817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.125843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.126027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.126055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.126237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.126266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 
00:24:55.735 [2024-07-16 00:28:47.126471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.126500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.126676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.126703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.126863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.126927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.127103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.735 [2024-07-16 00:28:47.127133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.735 qpair failed and we were unable to recover it. 00:24:55.735 [2024-07-16 00:28:47.127324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.736 [2024-07-16 00:28:47.127354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.736 qpair failed and we were unable to recover it. 00:24:55.736 [2024-07-16 00:28:47.127523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.736 [2024-07-16 00:28:47.127555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.736 qpair failed and we were unable to recover it. 00:24:55.736 [2024-07-16 00:28:47.127760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.736 [2024-07-16 00:28:47.127789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.736 qpair failed and we were unable to recover it. 00:24:55.736 [2024-07-16 00:28:47.128032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.736 [2024-07-16 00:28:47.128060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.736 qpair failed and we were unable to recover it. 00:24:55.736 [2024-07-16 00:28:47.128221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.736 [2024-07-16 00:28:47.128248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.736 qpair failed and we were unable to recover it. 00:24:55.736 [2024-07-16 00:28:47.128449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.736 [2024-07-16 00:28:47.128478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.736 qpair failed and we were unable to recover it. 
00:24:55.736 [2024-07-16 00:28:47.128685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.128715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.128921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.128952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.129148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.129196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.129432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.129462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.129640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.129669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.129841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.129892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.130099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.130132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.130356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.130382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.130616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.130663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.130906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.130938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.131102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.131138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.131344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.131371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.131530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.131558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.131761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.131798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.131976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.132006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.132174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.132202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.132374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.132405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.132618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.132648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.132839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.132869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.133070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.133103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.133287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.133316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.133510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.133540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.133735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.133766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.133956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.133983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.134141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.134169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.134379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.134411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.134647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.134678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.134859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.134908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.135093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.135140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.135349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.135383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.135563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.135595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.135796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.135823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.136008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.136039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.136256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.136287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.736 [2024-07-16 00:28:47.136485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.736 [2024-07-16 00:28:47.136516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.736 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.136688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.136719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.136904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.136934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.137101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.137131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.137365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.137392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.137566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.137593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.137784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.137814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.138034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.138062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.138249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.138279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.138472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.138503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.138695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.138726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.138934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.138962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.139145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.139176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.139377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.139404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.139560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.139587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.139784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.139814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.140019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.140052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.140235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.140261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.140491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.140522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.140685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.140715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.140949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.140985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.141174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.141202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.141372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.141427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.141644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.141681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.141884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.141915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.142094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.142122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.142301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.142331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.142533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.142563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.142763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.142800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.142997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.143024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.143190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.143221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.143416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.143449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.143622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.143651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.143888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.143916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.144116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.144147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.144362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.144406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.144596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.144627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.144804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.144831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.144987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.145013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.145226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.145255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.737 [2024-07-16 00:28:47.145454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.737 [2024-07-16 00:28:47.145483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.737 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.145674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.145700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.145880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.145910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.146108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.146137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.146302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.146330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.146521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.146547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.146778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.146828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.147028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.147056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.147254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.147283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.147458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.147483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.147670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.147720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.147890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.147919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.148091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.148118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.148308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.148335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.148536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.148565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.148825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.148891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.149056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.149085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 EAL: No free 2048 kB hugepages reported on node 1
00:24:55.738 [2024-07-16 00:28:47.149277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.149303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.149459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.149484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.149636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.149661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.149893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.149922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.150096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.150121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.150311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.150339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.150547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.150597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.150792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.150820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.150989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.151015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
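The stray EAL line above comes from DPDK's environment abstraction layer during SPDK application startup: it is reported when no free 2 MB hugepages are available on NUMA node 1. As a minimal sketch of how a hugepage shortage surfaces to a userspace program, assuming a Linux host with hugetlbfs support (this snippet is illustrative only and is not the SPDK/DPDK code that emitted the message):

    /* Illustrative only: request one anonymous 2 MB hugepage mapping.
     * With no hugepages reserved (vm.nr_hugepages = 0), mmap() with
     * MAP_HUGETLB typically fails with ENOMEM. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024; /* one 2048 kB hugepage */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            printf("mmap(MAP_HUGETLB) failed: %s\n", strerror(errno));
            return 1;
        }
        munmap(p, len);
        return 0;
    }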
00:24:55.738 [2024-07-16 00:28:47.151256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.151303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.151569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.151624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.151842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.151870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.152102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.152128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.152353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.152381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.152590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.152643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.152822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.152847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.153011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.153038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.153185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.153210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.153434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.153459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.153660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.153685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.153862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.738 [2024-07-16 00:28:47.153894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.738 qpair failed and we were unable to recover it.
00:24:55.738 [2024-07-16 00:28:47.154068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.154093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.154243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.154269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.154421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.154447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.154624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.154649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.154795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.154820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.155008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.155034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.155203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.155229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.155434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.155459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.155632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.155657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.155822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.155848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.156030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.156056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.156236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.156261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.156436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.156465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.156639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.156664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.156836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.156862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.157038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.157063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.157263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.157289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.157462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.157487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.157632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.157656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.157829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.157855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.158017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.158043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.158222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.158247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.158402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.158429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.158594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.158619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.158795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.158821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.159011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.159037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.159215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.159240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.159380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.159405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.159560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.159585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.159758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.159783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.159963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.159989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.160166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.160191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.160391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.160416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.160556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.160581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.160758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.160783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.160963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.160989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.161147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.161171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.161325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.161350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.161520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.161545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.161722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.161752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.161935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.161961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.162161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.162186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.162334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.162360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.162560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.162585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.162746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.739 [2024-07-16 00:28:47.162771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.739 qpair failed and we were unable to recover it.
00:24:55.739 [2024-07-16 00:28:47.162952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.162978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.163123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.163148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.163302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.163327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.163503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.163528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.163704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.163729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.163869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.163901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.164080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.164106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.164285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.164310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.164480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.164505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.164708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.164734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.164887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.164913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.165087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.165112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.165288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.165313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.165485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.165510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.165704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.165729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.165911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.165937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.166106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.166132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.166336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.166361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.166514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.166539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.166708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.166733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.166902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.166928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.167072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.167097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.167285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.167310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.167484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.167509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.167682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.167707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.167885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.167911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.168060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.168086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.168267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.168292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.168447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.168474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.168641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.168666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.168865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.168895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.169071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.169096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.169250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.169275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.169452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.169477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.169646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.169671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.169850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.169900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.170080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.170106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.170261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.170286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.170485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.170510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.170711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.170737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.170919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.170944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.171116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.171142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.171344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.171368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.171545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.171570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.740 qpair failed and we were unable to recover it.
00:24:55.740 [2024-07-16 00:28:47.171770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.740 [2024-07-16 00:28:47.171795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.172002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.741 [2024-07-16 00:28:47.172027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.172176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.741 [2024-07-16 00:28:47.172201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.172345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.741 [2024-07-16 00:28:47.172371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.172566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.741 [2024-07-16 00:28:47.172591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.172768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.741 [2024-07-16 00:28:47.172794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.172957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.741 [2024-07-16 00:28:47.172984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.173188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.741 [2024-07-16 00:28:47.173213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.173383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.741 [2024-07-16 00:28:47.173408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.741 qpair failed and we were unable to recover it.
00:24:55.741 [2024-07-16 00:28:47.173612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.173637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.173843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.173868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.174080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.174106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.174284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.174309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.174458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.174484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.174655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.174681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.174859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.174891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.175093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.175119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.175295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.175320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.175488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.175517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 
00:24:55.741 [2024-07-16 00:28:47.175684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.175709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.175888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.175914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.176059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.176085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.176259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.176284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.176471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.176496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.176696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.176722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.176897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.176924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.177129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.177154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.177355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.177380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.177585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.177610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 
00:24:55.741 [2024-07-16 00:28:47.177808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.177833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.178021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.178047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.178221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.178247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.178426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.178451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.178594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.178620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.178796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.178821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.179000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.179025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.179209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.179234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.179411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.179436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.179580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.179605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 
00:24:55.741 [2024-07-16 00:28:47.179782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.179808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.179990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.180016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.180188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.180214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.180381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.180406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.180589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.180614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.180794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.180819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.181021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.181047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.741 [2024-07-16 00:28:47.181225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.741 [2024-07-16 00:28:47.181250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.741 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.181447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.181472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.181619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.181644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 
00:24:55.742 [2024-07-16 00:28:47.181813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.181839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.182013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.182039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.182180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.182205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.182353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.182380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.182550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.182576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.182776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.182802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.182952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.182978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.183149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.183174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.183350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.183377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.183555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.183581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 
00:24:55.742 [2024-07-16 00:28:47.183722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.183751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.183924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.183950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.184153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.184178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.184377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.184403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.184558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.184583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.184785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.184811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.184989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.185015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.185188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.185213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.185385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.185410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.185590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.185615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 
00:24:55.742 [2024-07-16 00:28:47.185781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.185807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.185978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.186004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.186175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.186205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.186404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.186429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.186590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.186615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.186786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.186812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.186989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.187015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.187199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.187225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.187372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.187398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 00:24:55.742 [2024-07-16 00:28:47.187549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.742 [2024-07-16 00:28:47.187574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.742 qpair failed and we were unable to recover it. 
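Context for the repeated error: errno 111 on Linux is ECONNREFUSED, meaning the host at 10.0.0.2 was reachable but nothing was accepting TCP connections on port 4420 (the IANA-assigned NVMe/TCP port), so each connect() attempt by the initiator was refused. A minimal standalone sketch in plain POSIX sockets (illustrative only, not SPDK code; the address and port are copied from the log) that produces the same failure against a reachable host with no listener:

/* econnrefused_sketch.c - build with: cc -o sketch econnrefused_sketch.c */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),           /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no listener on the target port this fails and sets errno to
     * 111 (ECONNREFUSED), matching the posix_sock_create errors above. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}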
00:24:55.742 [2024-07-16 00:28:47.187755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.742 [2024-07-16 00:28:47.187780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.742 qpair failed and we were unable to recover it.
00:24:55.742 [2024-07-16 00:28:47.187958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.742 [2024-07-16 00:28:47.187986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:55.742 qpair failed and we were unable to recover it.
00:24:55.742 [2024-07-16 00:28:47.188115] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:55.742 [... the connect()/tqpair failure triplet resumes at 00:28:47.188138 and repeats through 00:28:47.207873, wall-clock prefix 00:24:55.742 to 00:24:55.744, again differing only in timestamps; the qpair is never recovered ...]
00:24:55.744 [2024-07-16 00:28:47.208058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.744 [2024-07-16 00:28:47.208084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.744 qpair failed and we were unable to recover it. 00:24:55.744 [2024-07-16 00:28:47.208237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.744 [2024-07-16 00:28:47.208263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.744 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.208437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.208468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.208645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.208671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.208844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.208870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.209044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.209071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.209242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.209268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.209462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.209488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.209658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.209684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.209837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.209863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 
00:24:55.745 [2024-07-16 00:28:47.210050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.210076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.210294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.210320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.210500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.210526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.210675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.210701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.210913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.210941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.211093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.211121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.211333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.211359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.211538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.211564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.211745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.211771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.211939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.211965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 
00:24:55.745 [2024-07-16 00:28:47.212171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.212197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.212374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.212400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.212576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.212602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.212770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.212796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.212947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.212974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.213122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.213148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.213336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.213363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.213538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.213564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.213734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.213760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.213978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.214005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 
00:24:55.745 [2024-07-16 00:28:47.214149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.214176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.214374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.214400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.214572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.214598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.214770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.214795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.214937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.214964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.215137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.215162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.215308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.215334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.215538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.215563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.215732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.215758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.215933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.215959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 
00:24:55.745 [2024-07-16 00:28:47.216134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.216160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.216337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.216363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.216536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.216562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.216766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.216796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.217004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.217031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.217212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.217238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.217414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.217440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.217638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.217664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.217799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.217825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.218003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.218029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 
00:24:55.745 [2024-07-16 00:28:47.218198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.218224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.218408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.745 [2024-07-16 00:28:47.218439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.745 qpair failed and we were unable to recover it. 00:24:55.745 [2024-07-16 00:28:47.218621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.218647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.218816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.218842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.219011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.219038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.219252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.219278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.219446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.219472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.219677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.219704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.219912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.219939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.220115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.220141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 
00:24:55.746 [2024-07-16 00:28:47.220347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.220373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.220577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.220602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.220813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.220839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.221037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.221063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.221243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.221269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.221450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.221476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.221676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.221702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.221914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.221941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.222096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.222122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.222276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.222302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 
00:24:55.746 [2024-07-16 00:28:47.222481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.222511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.222694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.222720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.222861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.222906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.223088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.223114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.223314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.223340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.223514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.223539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.223685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.223711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.223892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.223919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.224071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.224097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.224280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.224307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 
00:24:55.746 [2024-07-16 00:28:47.224490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.224516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.224679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.224704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.224881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.224907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.225080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.225107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.225272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.225298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.225453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.225479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.225655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.225681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.225833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.225859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.226012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.226038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.226217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.226254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 
00:24:55.746 [2024-07-16 00:28:47.226426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.226452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.226626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.226652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.226824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.226850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.227005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.227031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.227208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.227234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.227435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.227461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.227661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.746 [2024-07-16 00:28:47.227687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.746 qpair failed and we were unable to recover it. 00:24:55.746 [2024-07-16 00:28:47.227859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.227899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.228079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.228106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.228257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.228283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 
00:24:55.747 [2024-07-16 00:28:47.228431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.228458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.228627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.228652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.228827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.228854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.229029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.229056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.229198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.229224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.229399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.229425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.229561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.229587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.229787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.229813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.229989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.230017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.230193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.230219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 
00:24:55.747 [2024-07-16 00:28:47.230395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.230422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.230628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.230661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.230805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.230831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.231009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.231036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.231236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.231262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.231436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.231463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.231612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.231638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.231845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.231884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.232063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.232088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.232245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.232270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 
00:24:55.747 [2024-07-16 00:28:47.232446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.232472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.232654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.232680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.232848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.232874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.233058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.233084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.233262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.233288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.233467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.233493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.233689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.233715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.233917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.233943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.234147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.234173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.234332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.234358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 
00:24:55.747 [2024-07-16 00:28:47.234562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.234588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.234756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.234782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.234958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.234984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.235139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.235165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.235343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.235369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.235549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.235575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.235749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.235775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.235947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.235974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.236144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.236181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 00:24:55.747 [2024-07-16 00:28:47.236361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.747 [2024-07-16 00:28:47.236387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.747 qpair failed and we were unable to recover it. 
00:24:55.747 [2024-07-16 00:28:47.237183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeda3f0 (9): Bad file descriptor
00:24:55.747 [2024-07-16 00:28:47.237447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.747 [2024-07-16 00:28:47.237488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:55.747 qpair failed and we were unable to recover it.
00:24:55.747 [previous three messages repeated for tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420, timestamps 00:28:47.237650 through 00:28:47.238521]
00:24:55.748 [2024-07-16 00:28:47.238493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.238521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.238699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.238726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.238906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.238940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.239123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.239149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.239294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.239320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.239470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.239496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.239650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.239676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.239889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.239916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.240064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.240090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.240298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.240324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 
00:24:55.748 [2024-07-16 00:28:47.240494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.240520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.240662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.240688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.240847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.240873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.241052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.241078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.241255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.241281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.241478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.241504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.241703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.241729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.241900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.241926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.242125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.242151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.242358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.242384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 
00:24:55.748 [2024-07-16 00:28:47.242533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.242559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.242718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.242744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.242942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.242982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.243167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.243196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.243345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.243373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.243573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.243600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.243802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.243828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.244026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.244053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.244232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.244259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.244468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.244495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 
00:24:55.748 [2024-07-16 00:28:47.244707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.244734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.244907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.244934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.245112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.245139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.245309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.245336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.245488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.245515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.245720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.245747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.245897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.245924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.246097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.246123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.246316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.246342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.246517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.246544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 
00:24:55.748 [2024-07-16 00:28:47.246746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.246773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.246926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.246953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.247122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.247148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.247373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.247400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.247558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.247585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.247730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.247757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.248052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.248079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.248288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.248314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.748 [2024-07-16 00:28:47.248490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.748 [2024-07-16 00:28:47.248516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.748 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.248677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.248703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 
00:24:55.749 [2024-07-16 00:28:47.248872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.248904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.249044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.249070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.249219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.249246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.249443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.249469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.249641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.249667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.249844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.249870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.250049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.250080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.250252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.250278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.250479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.250505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.250705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.250731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 
00:24:55.749 [2024-07-16 00:28:47.250885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.250912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.251094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.251120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.251302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.251328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.251508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.251535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.251685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.251713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.251893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.251920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.252069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.252095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.252269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.252294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.252502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.252528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.252711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.252738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 
00:24:55.749 [2024-07-16 00:28:47.252966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.252992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.253201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.253228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.253367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.253393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.253568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.253594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.253770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.253797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.253958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.253985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.254190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.254216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.254395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.254421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.254621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.254647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.254828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.254855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 
00:24:55.749 [2024-07-16 00:28:47.255052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.255079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.255282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.255308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.255484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.255511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.255809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.255836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.256042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.256068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.256249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.256275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.256467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.256492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.256662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.256688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.256888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.256915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.257090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.257123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 
00:24:55.749 [2024-07-16 00:28:47.257311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.257337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.257516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.257542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.257746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.257772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.258052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.258078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.749 [2024-07-16 00:28:47.258314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.749 [2024-07-16 00:28:47.258341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.749 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.258527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.258554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.258730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.258765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.258922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.258949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.259106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.259148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.259323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.259348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 
00:24:55.750 [2024-07-16 00:28:47.259527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.259555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.259758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.259798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.260026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.260055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.260247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.260274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.260547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.260573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.260746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.260773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.260974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.261002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.261190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.261216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.261402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.261429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.261599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.261625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 
00:24:55.750 [2024-07-16 00:28:47.261801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.261827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.261983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.262010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.262184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.262209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.262379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.262405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.262606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.262633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.262814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.262840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.263021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.263048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.263254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.263279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.263481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.263507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.263714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.263741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 
00:24:55.750 [2024-07-16 00:28:47.263887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.263913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.264110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.264135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.264343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.264368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.264527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.264554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.264708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.264734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.264911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.264939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.265114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.265141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.265319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.265346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.265494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.265520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.265693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.265719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 
00:24:55.750 [2024-07-16 00:28:47.265903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.265930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.266104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.266130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.266347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.266374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.266552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.266578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.266727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.266753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.266954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.266981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.267131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.267163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.267340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.267368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.267568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.267593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.267765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.267791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 
00:24:55.750 [2024-07-16 00:28:47.267972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.267999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.268166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.268192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.268361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.268387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.268530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.268556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.268708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-07-16 00:28:47.268734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.750 qpair failed and we were unable to recover it. 00:24:55.750 [2024-07-16 00:28:47.268910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.268938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.269117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.269143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.269320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.269346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.269520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.269546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.269720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.269747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 
00:24:55.751 [2024-07-16 00:28:47.269906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.269933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.270115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.270142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.270354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.270380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.270579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.270605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.270777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.270803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.271009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.271036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.271218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.271244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.271400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.271426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.271600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.271625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.271779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.271806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 
00:24:55.751 [2024-07-16 00:28:47.271982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.272009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.272188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.272214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.272413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.272440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.272626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.272653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.272799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.272826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.272981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.273008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.273158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.273185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.273361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.273387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.273559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.273585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.273787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.273813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 
00:24:55.751 [2024-07-16 00:28:47.273993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.274020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.274168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.274195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.274357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.274384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.274560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.274587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.274784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.274810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.274959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.274985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.275131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.275162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.275370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.275397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.275539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.275565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.275721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.275747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 
00:24:55.751 [2024-07-16 00:28:47.275927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.275955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.276100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.276126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.276514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.276543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.276745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.276771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.276979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.277006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.277205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.277230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.277408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.277435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.277639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.277665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.277843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.277869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.278074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.278100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 
00:24:55.751 [2024-07-16 00:28:47.278318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.278344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.278497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.278524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.278708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.278734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.278891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.278918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.279067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.279093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.279263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.279290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.279466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.279493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.279670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.279697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.279874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.279905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.751 qpair failed and we were unable to recover it. 00:24:55.751 [2024-07-16 00:28:47.280112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-07-16 00:28:47.280138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 
00:24:55.752 [2024-07-16 00:28:47.280310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.280337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.280486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.280512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.280686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.280712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.280891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.280939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.281147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.281173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.281348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.281373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.281547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.281573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.281746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.281772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.281952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.281978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.282153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.282186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 
00:24:55.752 [2024-07-16 00:28:47.282352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.282378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.282532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.282558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.282731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.282757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.282905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.282932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.283111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.283138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.283313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.283340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.283507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.283538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.283707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.283733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.283886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.283912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.284094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.284120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 
00:24:55.752 [2024-07-16 00:28:47.284300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.284326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.284504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.284530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.284708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.284734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.284904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.284930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.285086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.285112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.285258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.285284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.285466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.285492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.285672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.285698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.285871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.285903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.286074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.286101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 
00:24:55.752 [2024-07-16 00:28:47.286259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.286285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.286465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.286492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.286670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.286696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.286872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.286905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.287050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.287077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.287246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.287272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.287444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.287470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.287643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.287669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.287869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.287910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.288120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.288147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 
00:24:55.752 [2024-07-16 00:28:47.288293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.288320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.288523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.288549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.288756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.288782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.288955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.288981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.289179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.289206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.289406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.289434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.289636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.289662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.289807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.289833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.290010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.290037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.290215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.290241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 
00:24:55.752 [2024-07-16 00:28:47.290438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.290464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.290670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.290696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.290848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.290880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.291084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.291110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.291286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-07-16 00:28:47.291312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.752 qpair failed and we were unable to recover it. 00:24:55.752 [2024-07-16 00:28:47.291514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.291540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.291742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.291772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.291975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.292001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.292174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.292200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.292349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.292376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 
00:24:55.753 [2024-07-16 00:28:47.292576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.292602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.292802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.292828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.293000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.293027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.293233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.293259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.293427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.293454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.293602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.293628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.293809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.293835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.294016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.294043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.294246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.294272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.294479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.294505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 
00:24:55.753 [2024-07-16 00:28:47.294710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.294737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.294914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.294941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.295090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.295117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.295316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.295342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.295515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.295541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.295712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.295738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.295917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.295943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.296114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.296140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.296293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.296320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.296468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.296494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 
00:24:55.753 [2024-07-16 00:28:47.296669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.296695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.296847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.296873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.297053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.297078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.297255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.297281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.297482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.297509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.297686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.297712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.297913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.297940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.298086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.298112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.298326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.298353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.298519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.298545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 
00:24:55.753 [2024-07-16 00:28:47.298723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.298750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.298934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.298960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.299114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.299141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.299311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.299337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.299485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.299511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.299698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.299724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.299891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.299926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.300131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.300157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.300331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.300356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.300555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.300581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 
00:24:55.753 [2024-07-16 00:28:47.300749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.300774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.300970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.300996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.301166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.301192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.301366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.301392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.301563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.301589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.753 [2024-07-16 00:28:47.301754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.753 [2024-07-16 00:28:47.301780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.753 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.301962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.301988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.302131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.302162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.302312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.302339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.302518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.302544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 
00:24:55.754 [2024-07-16 00:28:47.302726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.302752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.302927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.302954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.303132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.303157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.303292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.303318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.303489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.303515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.303696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.303721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.303893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.303919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.304122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.304148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.304315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.304340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 00:24:55.754 [2024-07-16 00:28:47.304513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.754 [2024-07-16 00:28:47.304539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:55.754 qpair failed and we were unable to recover it. 
00:24:55.754 [2024-07-16 00:28:47.304742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.754 [2024-07-16 00:28:47.304767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:55.754 qpair failed and we were unable to recover it.
[... the same three-line connect() failed (errno = 111) / sock connection error / qpair failed record for tqpair=0x7f4cfc000b90 (addr=10.0.0.2, port=4420) repeats, with only the timestamps advancing, up to and briefly interleaved with the application NOTICE output below ...]
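On Linux, errno 111 is ECONNREFUSED: the initiator's connect() reached 10.0.0.2, but nothing was accepting TCP connections on port 4420 at that moment, so every qpair attempt fails immediately and the recovery path gives up. A minimal stand-alone sketch of how posix_sock_create's connect() call ends up in this state (not SPDK code; the address and port are taken from the log, everything else is illustrative):

  /* econnrefused_demo.c - connect to a TCP port with no listener. */
  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <unistd.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <sys/socket.h>

  int main(void)
  {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0) {
          perror("socket");
          return 1;
      }

      struct sockaddr_in addr = { 0 };
      addr.sin_family = AF_INET;
      addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
      inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

      /* If the host is reachable but no listener is bound to the port,
       * connect() fails with ECONNREFUSED (111 on Linux); an unreachable
       * host would instead give ETIMEDOUT or EHOSTUNREACH. */
      if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
          printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

      close(fd);
      return 0;
  }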
00:24:55.755 [2024-07-16 00:28:47.310817] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:55.755 [2024-07-16 00:28:47.310853] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:55.755 [2024-07-16 00:28:47.310874] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:55.755 [2024-07-16 00:28:47.310897] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:55.755 [2024-07-16 00:28:47.310907] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:55.755 [2024-07-16 00:28:47.310957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:24:55.755 [2024-07-16 00:28:47.310987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:24:55.755 [2024-07-16 00:28:47.311013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:24:55.755 [2024-07-16 00:28:47.311016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
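The NOTICE block above is SPDK's standard tracing hint, emitted once at application start-up. Based only on the commands the log itself names, a capture session while the nvmf target is still running would look like the following (the /tmp destination is illustrative, not from the log):

  spdk_trace -s nvmf -i 0
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0

The second form preserves the shared-memory trace file so it can be analyzed offline after the application exits.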
[... the connect() failed (errno = 111) / sock connection error / qpair failed record for tqpair=0x7f4cfc000b90 (addr=10.0.0.2, port=4420) resumes and continues to repeat, unchanged except for the timestamps, through [2024-07-16 00:28:47.344902] ...]
00:24:56.042 [2024-07-16 00:28:47.345049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.042 [2024-07-16 00:28:47.345075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.042 qpair failed and we were unable to recover it. 00:24:56.042 [2024-07-16 00:28:47.345234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.042 [2024-07-16 00:28:47.345259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.042 qpair failed and we were unable to recover it. 00:24:56.042 [2024-07-16 00:28:47.345433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.345460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.345668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.345694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.345874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.345928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.346104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.346130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.346308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.346338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.346514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.346540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.346683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.346709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.346882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.346908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 
00:24:56.043 [2024-07-16 00:28:47.347059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.347084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.347260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.347286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.347429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.347456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.347624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.347650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.347805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.347830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.348032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.348058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.348204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.348229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.348391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.348417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.348560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.348586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.348737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.348763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 
00:24:56.043 [2024-07-16 00:28:47.348938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.348965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.349141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.349167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.349308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.349333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.349485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.349510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.349685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.349711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.349888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.349914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.350077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.350103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.350247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.350272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.350445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.350470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.350609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.350635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 
00:24:56.043 [2024-07-16 00:28:47.350812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.350838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.351005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.351032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.351176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.351202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.351381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.351407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.351556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.351581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.351733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.351759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.351915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.351943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.352147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.352173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.352361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.352387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.352535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.352562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 
00:24:56.043 [2024-07-16 00:28:47.352710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.352736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.352891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.352917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.043 [2024-07-16 00:28:47.353092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.043 [2024-07-16 00:28:47.353117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.043 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.353267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.353292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.353481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.353507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.353661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.353687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.353821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.353852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.354031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.354074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.354225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.354253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.354539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.354565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 
00:24:56.044 [2024-07-16 00:28:47.354706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.354732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.354935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.354962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.355128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.355154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.355308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.355334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.355492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.355517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.355653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.355679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.355831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.355857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.356018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.356044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.356222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.356248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.356401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.356427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 
00:24:56.044 [2024-07-16 00:28:47.356591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.356617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.356784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.356810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.357002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.357029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.357207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.357233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.357377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.357403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.357540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.357566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.357704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.357729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.357906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.357932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.358074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.358099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.358248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.358273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 
00:24:56.044 [2024-07-16 00:28:47.358450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.358476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.358654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.358679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.358849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.358882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.359057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.359083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.359226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.359251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.359399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.359426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.359583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.359609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.359778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.359804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.359958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.359984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.360122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.360148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 
00:24:56.044 [2024-07-16 00:28:47.360313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.360338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.360509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.360535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.360702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.360728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.360889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.044 [2024-07-16 00:28:47.360916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.044 qpair failed and we were unable to recover it. 00:24:56.044 [2024-07-16 00:28:47.361057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.361083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.361265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.361290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.361468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.361494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.361659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.361689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.361839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.361865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.362013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.362038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 
00:24:56.045 [2024-07-16 00:28:47.362186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.362211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.362356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.362382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.362519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.362544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.362689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.362715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.362854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.362885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.363064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.363090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.363238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.363263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.363445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.363470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.363641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.363667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.363804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.363829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 
00:24:56.045 [2024-07-16 00:28:47.363995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.364021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.364230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.364255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.364405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.364430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.364574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.364599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.364739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.364764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.364917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.364944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.365088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.365113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.365249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.365274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.365478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.365504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.365668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.365694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 
00:24:56.045 [2024-07-16 00:28:47.365832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.365857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.366041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.366067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.366240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.366266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.366439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.366465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.366641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.366670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.366813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.366839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.367007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.367033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.367204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.367230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.367384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.367409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.367560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.367586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 
00:24:56.045 [2024-07-16 00:28:47.367767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.367793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.367941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.367967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.368138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.368164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.368352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.368377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.368539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.368564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.045 qpair failed and we were unable to recover it. 00:24:56.045 [2024-07-16 00:28:47.368716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.045 [2024-07-16 00:28:47.368742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.046 qpair failed and we were unable to recover it. 00:24:56.046 [2024-07-16 00:28:47.368945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.046 [2024-07-16 00:28:47.368972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.046 qpair failed and we were unable to recover it. 00:24:56.046 [2024-07-16 00:28:47.369113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.046 [2024-07-16 00:28:47.369139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.046 qpair failed and we were unable to recover it. 00:24:56.046 [2024-07-16 00:28:47.369284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.046 [2024-07-16 00:28:47.369310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.046 qpair failed and we were unable to recover it. 00:24:56.046 [2024-07-16 00:28:47.369458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.046 [2024-07-16 00:28:47.369483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.046 qpair failed and we were unable to recover it. 
00:24:56.046 [2024-07-16 00:28:47.369633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.046 [2024-07-16 00:28:47.369659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420
00:24:56.046 qpair failed and we were unable to recover it.
00:24:56.046 [... the three messages above repeat verbatim for every subsequent reconnect attempt from 00:28:47.369802 through 00:28:47.409227 (console timestamps 00:24:56.046-00:24:56.051); each attempt to connect tqpair=0xecc3f0 to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered ...]
00:24:56.051 [2024-07-16 00:28:47.409380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.409406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.409545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.409570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.409721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.409746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.409920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.409946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.410090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.410115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.410259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.410286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.410454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.410480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.410631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.410657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.410796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.410822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.410982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.411009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 
00:24:56.051 [2024-07-16 00:28:47.411211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.411237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.411386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.411412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.411566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.411591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.411768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.411794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.411939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.411965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.412128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.412154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.412363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.412390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.412539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.412565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.412741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.412766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 00:24:56.051 [2024-07-16 00:28:47.412922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.051 [2024-07-16 00:28:47.412952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.051 qpair failed and we were unable to recover it. 
00:24:56.051 [2024-07-16 00:28:47.413135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.413161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.413300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.413325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.413495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.413521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.413664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.413689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.413840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.413866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.414030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.414057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.414214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.414240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.414381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.414406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.414551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.414578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.414726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.414751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 
00:24:56.052 [2024-07-16 00:28:47.414905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.414931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.415118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.415143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.415324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.415350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.415499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.415525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.415680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.415706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.415883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.415909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.416049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.416075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.416242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.416268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.416439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.416465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.416632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.416657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 
00:24:56.052 [2024-07-16 00:28:47.416835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.416860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.417008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.417034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.417180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.417205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.417355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.417381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.417567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.417592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.417729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.417754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.417909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.417935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.418113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.418139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.418290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.418316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.418464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.418490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 
00:24:56.052 [2024-07-16 00:28:47.418625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.418651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.418849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.418880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.419025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.419051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.419213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.419238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.419383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.419408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.419611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.419636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.419782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.419808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.419988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.420014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.420158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.420183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.420366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.420394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 
00:24:56.052 [2024-07-16 00:28:47.420599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.420631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.052 qpair failed and we were unable to recover it. 00:24:56.052 [2024-07-16 00:28:47.420785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.052 [2024-07-16 00:28:47.420811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.420954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.420980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.421156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.421181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.421331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.421357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.421502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.421527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.421702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.421727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.421903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.421945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.422127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.422154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.422303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.422330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 
00:24:56.053 [2024-07-16 00:28:47.422519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.422545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.422712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.422737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.422899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.422925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.423095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.423121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.423280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.423307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.423483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.423509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.423655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.423681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.423828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.423856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.424021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.424048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.424346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.424372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 
00:24:56.053 [2024-07-16 00:28:47.424542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.424568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.424730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.424757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.424960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.424987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.425155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.425181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.425328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.425354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.425509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.425536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.425684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.425711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.425901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.425927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.426078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.426104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.426308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.426334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 
00:24:56.053 [2024-07-16 00:28:47.426510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.426536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.426710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.426736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.426908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.426935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.427104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.427130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.427275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.427301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.427467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.427493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.427642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.427668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.427868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.427899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.428050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.428077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.428224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.428250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 
00:24:56.053 [2024-07-16 00:28:47.428389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.428419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.428561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.428587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.053 qpair failed and we were unable to recover it. 00:24:56.053 [2024-07-16 00:28:47.428740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.053 [2024-07-16 00:28:47.428767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.428921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.428948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.429103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.429128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.429310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.429337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.429515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.429540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.429696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.429721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.429874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.429904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.430079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.430105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 
00:24:56.054 [2024-07-16 00:28:47.430256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.430281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.430437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.430463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.430639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.430664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.430835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.430861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.431017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.431042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.431184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.431209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.431399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.431424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.431581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.431607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.431759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.431785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.431936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.431963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 
00:24:56.054 [2024-07-16 00:28:47.432133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.432159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.432340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.432365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.432512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.432538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.432715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.432740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.432888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.432915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.433096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.433122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.433268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.433293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.433464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.433490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.433664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.433690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 00:24:56.054 [2024-07-16 00:28:47.433849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.054 [2024-07-16 00:28:47.433880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.054 qpair failed and we were unable to recover it. 
00:24:56.054 [2024-07-16 00:28:47.434065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.054 [2024-07-16 00:28:47.434092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.054 qpair failed and we were unable to recover it.
00:24:56.059 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim from [2024-07-16 00:28:47.434265] through [2024-07-16 00:28:47.473235] ...]
00:24:56.059 [2024-07-16 00:28:47.473411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.059 [2024-07-16 00:28:47.473437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.059 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.473582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.473608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.473753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.473780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.473980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.474006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.474151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.474176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.474376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.474401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.474546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.474573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.474722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.474749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.474920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.474946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.475136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.475162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 
00:24:56.060 [2024-07-16 00:28:47.475311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.475336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.475472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.475497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.475643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.475669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.475829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.475854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.476069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.476095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.476267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.476292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.476433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.476459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.476618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.476643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.476822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.476847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.477008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.477034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 
00:24:56.060 [2024-07-16 00:28:47.477174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.477200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.477341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.477366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.477516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.477542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.477718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.477743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.477889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.477915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.478058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.478084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.478229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.478255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.478424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.478454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.478597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.478622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.478812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.478837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 
00:24:56.060 [2024-07-16 00:28:47.478993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.479020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.479161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.479186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.479336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.479363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.479529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.479555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.479694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.479720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.479883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.479909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.480098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.480123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.480290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.060 [2024-07-16 00:28:47.480316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.060 qpair failed and we were unable to recover it. 00:24:56.060 [2024-07-16 00:28:47.480484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.480510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.480663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.480689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 
00:24:56.061 [2024-07-16 00:28:47.480828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.480853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.481039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.481065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.481231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.481257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.481404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.481430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.481597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.481623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.481796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.481821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.481984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.482010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.482159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.482185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.482330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.482356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.482522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.482548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 
00:24:56.061 [2024-07-16 00:28:47.482682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.482708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.482881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.482908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.483085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.483111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.483310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.483335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.483484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.483510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.483662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.483688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.483881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.483908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.484052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.484077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.484243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.484268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.484433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.484459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 
00:24:56.061 [2024-07-16 00:28:47.484640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.484666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.484839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.484865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.485034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.485060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.485200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.485226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.485396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.485421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.485565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.485592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.485751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.485777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.485926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.485956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.486121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.486146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.486329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.486355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 
00:24:56.061 [2024-07-16 00:28:47.486514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.486540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.486684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.486711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.486860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.486902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.487074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.487099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.487254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.487280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.487414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.487440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.487611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.487637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.487790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.487816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.061 [2024-07-16 00:28:47.487965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.061 [2024-07-16 00:28:47.487992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.061 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.488137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.488163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 
00:24:56.062 [2024-07-16 00:28:47.488317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.488344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.488551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.488577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.488747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.488773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.488949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.488998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.489158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.489184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.489327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.489352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.489523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.489549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.489713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.489739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.489886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.489912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.490063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.490088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 
00:24:56.062 [2024-07-16 00:28:47.490237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.490263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.490438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.490464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.490612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.490638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.490788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.490815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.490995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.491023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.491176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.491202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.491380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.491406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.491559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.491585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.491729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.491755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.491908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.491934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 
00:24:56.062 [2024-07-16 00:28:47.492082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.492109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.492277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.492302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.492485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.492521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.492683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.492708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.492910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.492936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.493096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.493122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.493321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.493347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.493496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.493527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.493716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.493742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.493903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.493929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 
00:24:56.062 [2024-07-16 00:28:47.494077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.494103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.494248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.494273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.494434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.494459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.494614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.494640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.494815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.494840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.495005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.495032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.495173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.495199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.495341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.495366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.495566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.495592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 00:24:56.062 [2024-07-16 00:28:47.495735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-16 00:28:47.495760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.062 qpair failed and we were unable to recover it. 
00:24:56.063 [2024-07-16 00:28:47.495918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.495944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.496091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.496117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.496266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.496292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.496479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.496504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.496676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.496702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.496849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.496882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.497025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.497051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.497223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.497248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.497394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.497420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 00:24:56.063 [2024-07-16 00:28:47.497623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.497648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it. 
00:24:56.063 [2024-07-16 00:28:47.497814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.063 [2024-07-16 00:28:47.497840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.063 qpair failed and we were unable to recover it.
[... the same error pair, posix.c:1023:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420", repeats continuously from 00:28:47.498 through 00:28:47.536; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:56.068 [2024-07-16 00:28:47.536405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.536431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it.
00:24:56.068 [2024-07-16 00:28:47.536600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.536626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.536775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.536802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.536980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.537007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.537194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.537220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.537362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.537388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.537531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.537557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.537696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.537721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.537894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.537924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.538067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.538094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.538269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.538294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 
00:24:56.068 [2024-07-16 00:28:47.538456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.538482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.538655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.538680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.538863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.068 [2024-07-16 00:28:47.538896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.068 qpair failed and we were unable to recover it. 00:24:56.068 [2024-07-16 00:28:47.539057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.539083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.539235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.539261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.539438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.539464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.539628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.539654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.539798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.539823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.539976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.540003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.540154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.540179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 
00:24:56.069 [2024-07-16 00:28:47.540376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.540402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.540559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.540585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.540760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.540786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.540966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.540993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.541151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.541177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.541320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.541345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.541511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.541536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.541675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.541701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.541856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.541888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.542032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.542059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 
00:24:56.069 [2024-07-16 00:28:47.542199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.542225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.542390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.542416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.542560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.542586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.542729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.542755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.542897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.542923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.543124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.543151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.543311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.543336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.543476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.543502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.543666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.543691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.543837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.543863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 
00:24:56.069 [2024-07-16 00:28:47.544058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.544084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.544258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.544284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.544420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.544446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.544619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.544645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.544819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.544845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.545027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.545052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.545243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.545268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.545423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.545453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.545615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.545641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.545813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.545839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 
00:24:56.069 [2024-07-16 00:28:47.545987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.546013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.546154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.546179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.546316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.546342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.546478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.069 [2024-07-16 00:28:47.546504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.069 qpair failed and we were unable to recover it. 00:24:56.069 [2024-07-16 00:28:47.546653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.546679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.546849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.546874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.547052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.547077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.547252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.547278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.547431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.547458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.547615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.547641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 
00:24:56.070 [2024-07-16 00:28:47.547822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.547848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.548013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.548039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.548209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.548235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.548414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.548442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.548588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.548615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.548754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.548780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.548921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.548948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.549090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.549116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.549307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.549333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.549504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.549530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 
00:24:56.070 [2024-07-16 00:28:47.549673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.549698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.549842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.549868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.550034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.550059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.550217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.550243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.550422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.550448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.550589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.550614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.550784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.550810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.551018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.551045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.551209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.551235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.551397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.551423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 
00:24:56.070 [2024-07-16 00:28:47.551624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.551651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.551795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.551821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.551962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.551989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.552135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.552162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.552319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.552345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.552504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.552529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.552686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.552712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.552889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.552920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.553072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.553098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.553245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.553270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 
00:24:56.070 [2024-07-16 00:28:47.553444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.553470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.553615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.553641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.553810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.553836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.553998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.554025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.070 qpair failed and we were unable to recover it. 00:24:56.070 [2024-07-16 00:28:47.554169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.070 [2024-07-16 00:28:47.554195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.554342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.554368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.554539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.554565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.554719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.554744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.554924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.554950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.555104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.555131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 
00:24:56.071 [2024-07-16 00:28:47.555271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.555297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.555492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.555519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.555687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.555713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.555855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.555897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.556047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.556072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.556250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.556276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.556436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.556462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.556636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.556662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.556802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.556828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.556977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.557003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 
00:24:56.071 [2024-07-16 00:28:47.557146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.557172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.557330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.557356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.557518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.557543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.557705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.557731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.557884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.557911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.558065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.558091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.558245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.558270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.558407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.558432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.558606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.558631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.558777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.558803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 
00:24:56.071 [2024-07-16 00:28:47.558972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.558998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.559137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.559162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.559332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.559369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.559538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.559564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.559707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.559732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.559896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.559923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.560064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.560090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.560235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.560266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.560439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.560465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 00:24:56.071 [2024-07-16 00:28:47.560639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.071 [2024-07-16 00:28:47.560665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.071 qpair failed and we were unable to recover it. 
00:24:56.071 [2024-07-16 00:28:47.560825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.071 [2024-07-16 00:28:47.560850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.071 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats for roughly 200 further reconnect attempts, with timestamps running from 00:28:47.561 through 00:28:47.599; only the final entry is kept below ...]
00:24:56.077 [2024-07-16 00:28:47.599524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.077 [2024-07-16 00:28:47.599551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.077 qpair failed and we were unable to recover it.
00:24:56.077 [2024-07-16 00:28:47.599700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.599726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.599882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.599908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.600054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.600080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.600238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.600264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.600459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.600485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.600650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.600675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.600838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.600864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.601015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.601041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.601205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.601231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.601409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.601435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 
00:24:56.077 [2024-07-16 00:28:47.601587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.601614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.601773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.601799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.601954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.601980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.602151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.602176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.602332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.602361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.602509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.602536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.602717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.602743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.602901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.602927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.603099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.603126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.603262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.603288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 
00:24:56.077 [2024-07-16 00:28:47.603455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.603480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.603638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.603664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.603839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.603864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.604026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.604052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.604250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.604276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.604418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.604443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.604599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.604624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.604764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.604789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.077 [2024-07-16 00:28:47.604945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.077 [2024-07-16 00:28:47.604971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.077 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.605142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.605167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 
00:24:56.078 [2024-07-16 00:28:47.605312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.605338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.605489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.605514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.605654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.605680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.605832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.605858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.606031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.606057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.606212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.606238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.606386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.606412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.606553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.606578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.606723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.606749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.606910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.606936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 
00:24:56.078 [2024-07-16 00:28:47.607108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.607133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.607274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.607300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.607478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.607504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.607654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.607685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.607857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.607888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.608035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.608061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.608244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.608269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.608439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.608465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.608623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.608648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.608787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.608812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 
00:24:56.078 [2024-07-16 00:28:47.608965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.608991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.609140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.609166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.609305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.609331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.609474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.609499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.609640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.609671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.609852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.609882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.610027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.610053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.610199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.610225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.610368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.610394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.610554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.610579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 
00:24:56.078 [2024-07-16 00:28:47.610720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.610745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.610900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.610926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.611106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.611133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.611274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.611299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.611469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.611495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.611700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.611725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.611867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.611900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.612070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.612096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.612259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.612284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 00:24:56.078 [2024-07-16 00:28:47.612430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.078 [2024-07-16 00:28:47.612457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.078 qpair failed and we were unable to recover it. 
00:24:56.079 [2024-07-16 00:28:47.612624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.612650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.612823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.612849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.612999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.613024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.613164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.613189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.613330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.613356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.613504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.613529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.613702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.613728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.613902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.613928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.614073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.614098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.614239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.614265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 
00:24:56.079 [2024-07-16 00:28:47.614436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.614461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.614620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.614647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.614818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.614844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.615040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.615066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.615215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.615240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.615385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.615410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.615558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.615584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.615744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.615770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.615944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.615970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.616122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.616147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 
00:24:56.079 [2024-07-16 00:28:47.616295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.616322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.616460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.616485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.616623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.616648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.616794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.616820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.616977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.617007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.617206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.617232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.617389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.617415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.617579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.617605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.617747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.617773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.617922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.617948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 
00:24:56.079 [2024-07-16 00:28:47.618090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.618116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.618254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.618280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.618432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.618457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.618603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.618629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.618820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.618846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.618998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.619024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.619213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.619238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.619380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.619405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.619550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.619576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.079 [2024-07-16 00:28:47.619736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.619761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 
00:24:56.079 [2024-07-16 00:28:47.619921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.079 [2024-07-16 00:28:47.619947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.079 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.620118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.620144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.620293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.620319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.620467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.620494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.620671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.620697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.620839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.620865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.621043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.621069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.621208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.621233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.621377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.621403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.621542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.621567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 
00:24:56.080 [2024-07-16 00:28:47.621747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.621772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.621922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.621949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.622093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.622119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.622308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.622334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.622494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.622520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.622688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.622714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.622892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.622919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.623093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.623119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.623293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.623319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 00:24:56.080 [2024-07-16 00:28:47.623492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.623518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it. 
00:24:56.080 [2024-07-16 00:28:47.623659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.080 [2024-07-16 00:28:47.623684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.080 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 00:28:47.623835 through 00:28:47.662154: every attempt is connect() failed, errno = 111 against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." Nearly all entries report tqpair=0x7f4cfc000b90; a short run between 00:28:47.632034 and 00:28:47.634677 reports tqpair=0xecc3f0 instead. Duplicate entries elided for readability. ...]
00:24:56.377 [2024-07-16 00:28:47.662330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.662356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.662501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.662527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.662689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.662714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.662887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.662913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.663063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.663088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.663246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.663271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.663460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.663485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.663635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.663661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.663813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.663838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.663982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.664007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 
00:24:56.377 [2024-07-16 00:28:47.664180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.664205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.664353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.664379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.664524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.664550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.664703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.664729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.664889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.664916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.665087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.665113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.665291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.665316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.665456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.665482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.665656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.665681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 00:24:56.377 [2024-07-16 00:28:47.665857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.377 [2024-07-16 00:28:47.665899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.377 qpair failed and we were unable to recover it. 
00:24:56.378 [2024-07-16 00:28:47.666053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.666079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.666222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.666251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.666418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.666443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.666613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.666639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.666813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.666839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.667014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.667040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.667187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.667213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.667391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.667416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.667556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.667581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.667776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.667801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 
00:24:56.378 [2024-07-16 00:28:47.667949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.667975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.668138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.668164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.668321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.668347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.668536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.668562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.668710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.668735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.668893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.668919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.669067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.669093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.669269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.669295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.669436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.669462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.669611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.669637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 
00:24:56.378 [2024-07-16 00:28:47.669810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.669835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.670035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.670061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.670214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.670240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.670401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.670427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.670596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.670622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.670793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.670819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.670980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.671006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.671153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.671179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.671328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.671354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.671526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.671552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 
00:24:56.378 [2024-07-16 00:28:47.671693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.671718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.671882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.671908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.672070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.672096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.672234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.672259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.672422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.672449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.672615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.672641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.672807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.672833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.672989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.673015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.673167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.673193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.378 [2024-07-16 00:28:47.673331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.673357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 
00:24:56.378 [2024-07-16 00:28:47.673498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.378 [2024-07-16 00:28:47.673523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.378 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.673693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.673718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.673889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.673915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.674081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.674107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.674283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.674308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.674471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.674497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.674685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.674711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.674860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.674904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.675053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.675080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.675224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.675250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 
00:24:56.379 [2024-07-16 00:28:47.675398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.675423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.675604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.675629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.675819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.675844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.676020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.676046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.676237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.676262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.676412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.676438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.676575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.676601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.676768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.676793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.676940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.676966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.677119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.677144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 
00:24:56.379 [2024-07-16 00:28:47.677293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.677318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.677494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.677519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.677700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.677725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.677862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.677894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.678042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.678068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.678242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.678268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.678412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.678438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.678602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.678628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.678792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.678821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.678976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.679002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 
00:24:56.379 [2024-07-16 00:28:47.679183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.679208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.679348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.679374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.679553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.679578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.679747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.679773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.679919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.679945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.680124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.680149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.680316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.680342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.680493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.680518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.680689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.680715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.680873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.680903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 
00:24:56.379 [2024-07-16 00:28:47.681048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.681073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.681230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.379 [2024-07-16 00:28:47.681256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.379 qpair failed and we were unable to recover it. 00:24:56.379 [2024-07-16 00:28:47.681436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.681462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.681615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.681640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.681784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.681811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.681986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.682013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.682204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.682230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.682402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.682428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.682596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.682621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.682761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.682787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 
00:24:56.380 [2024-07-16 00:28:47.682939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.682965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.683110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.683135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.683273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.683299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.683450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.683475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.683627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.683653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.683830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.683856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.684028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.684054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.684225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.684250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.684410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.684436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.684577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.684603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 
00:24:56.380 [2024-07-16 00:28:47.684771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.684796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.684944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.684970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.685132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.685157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.685332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.685357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.685503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.685529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.685670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.685695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.685865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.685895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.686061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.686087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.686251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.686280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.686422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.686448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 
00:24:56.380 [2024-07-16 00:28:47.686591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.686616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.686784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.686810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.686969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.686995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.687194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.687220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.687369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.687395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.687548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.687573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.687729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.687755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.687932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.687959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.688130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.688155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 00:24:56.380 [2024-07-16 00:28:47.688314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.380 [2024-07-16 00:28:47.688340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.380 qpair failed and we were unable to recover it. 
00:24:56.386 [2024-07-16 00:28:47.725826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.725852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.726004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.726029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.726178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.726203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.726350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.726376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.726528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.726554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.726720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.726745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.726893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.726919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.727069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.727096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.727259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.727284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.727471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.727497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 
00:24:56.386 [2024-07-16 00:28:47.727641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.727666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.727807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.727833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.727979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.728005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.728166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.728192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.728337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.728364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.728541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.728567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.728756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.728782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.728925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.728951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.729092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.729117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.729258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.729285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 
00:24:56.386 [2024-07-16 00:28:47.729473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.729499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.729650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.729675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.729831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.729856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.730050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.730075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.730239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.730264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.730461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.730487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.730653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.730678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.730829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.730855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.731015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.731045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.731220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.731245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 
00:24:56.386 [2024-07-16 00:28:47.731417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.731442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.731612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.731638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.731803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.731828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.732002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.732029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.732164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.386 [2024-07-16 00:28:47.732190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.386 qpair failed and we were unable to recover it. 00:24:56.386 [2024-07-16 00:28:47.732378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.732403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.732543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.732568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.732748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.732774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.732943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.732969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.733110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.733136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 
00:24:56.387 [2024-07-16 00:28:47.733285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.733310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.733448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.733474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.733629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.733655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.733812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.733838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.733986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.734012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.734172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.734198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.734376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.734402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.734544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.734570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.734712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.734738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.734912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.734938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 
00:24:56.387 [2024-07-16 00:28:47.735130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.735155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.735313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.735338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.735480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.735506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.735678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.735703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.735892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.735918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.736072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.736098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.736236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.736261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.736421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.736446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.736619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.736645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.736806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.736831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 
00:24:56.387 [2024-07-16 00:28:47.737014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.737041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.737187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.737212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.737411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.737437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.737587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.737613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.737750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.737775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.737918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.737944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.738102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.738128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.738294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.738319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.738461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.738490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.738641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.738666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 
00:24:56.387 [2024-07-16 00:28:47.738832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.738857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.739026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.739051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.739193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.739219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.739388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.739413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.739563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.739590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.387 [2024-07-16 00:28:47.739748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.387 [2024-07-16 00:28:47.739773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.387 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.739926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.739954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.740125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.740151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.740297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.740323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.740476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.740502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 
00:24:56.388 [2024-07-16 00:28:47.740665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.740691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.740850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.740892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.741075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.741100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.741272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.741297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.741467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.741492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.741668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.741695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.741849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.741874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.742041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.742067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.742217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.742243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.742444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.742469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 
00:24:56.388 [2024-07-16 00:28:47.742610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.742636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.742778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.742805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.742970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.742996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.743156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.743182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.743346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.743372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.743537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.743563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.743739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.743764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.743915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.743941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.744078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.744104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.744242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.744268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 
00:24:56.388 [2024-07-16 00:28:47.744446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.744471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.744632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.744658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.744802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.744827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.744988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.745014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.745215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.745241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.745380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.745406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.745581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.745606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.745782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.745807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.745946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.745976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.746119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.746145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 
00:24:56.388 [2024-07-16 00:28:47.746284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.746310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.746459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.746485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.746633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.746660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.746811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.746838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.747008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.747034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.747220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.747245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.388 [2024-07-16 00:28:47.747391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.388 [2024-07-16 00:28:47.747416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.388 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.747582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.747608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.747748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.747773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.747945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.747971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 
00:24:56.389 [2024-07-16 00:28:47.748126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.748151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.748310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.748335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.748515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.748541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.748690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.748718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.748857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.748894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.749064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.749089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.749227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.749253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.749443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.749468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.749645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.749672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 00:24:56.389 [2024-07-16 00:28:47.749841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.389 [2024-07-16 00:28:47.749867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.389 qpair failed and we were unable to recover it. 
00:24:56.389 [2024-07-16 00:28:47.750016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.389 [2024-07-16 00:28:47.750042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.389 qpair failed and we were unable to recover it.
00:24:56.394 [... the three messages above repeated 209 more times between 00:28:47.750210 and 00:28:47.789480; every connect attempt failed with errno = 111 (ECONNREFUSED) on tqpair=0x7f4cfc000b90, addr=10.0.0.2, port=4420 ...]
00:24:56.394 [2024-07-16 00:28:47.789638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.394 [2024-07-16 00:28:47.789663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.394 qpair failed and we were unable to recover it. 00:24:56.394 [2024-07-16 00:28:47.789828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.394 [2024-07-16 00:28:47.789853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.394 qpair failed and we were unable to recover it. 00:24:56.394 [2024-07-16 00:28:47.790012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.394 [2024-07-16 00:28:47.790039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.394 qpair failed and we were unable to recover it. 00:24:56.394 [2024-07-16 00:28:47.790244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.394 [2024-07-16 00:28:47.790269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.394 qpair failed and we were unable to recover it. 00:24:56.394 [2024-07-16 00:28:47.790524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.394 [2024-07-16 00:28:47.790549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.394 qpair failed and we were unable to recover it. 00:24:56.394 [2024-07-16 00:28:47.790721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.394 [2024-07-16 00:28:47.790746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.394 qpair failed and we were unable to recover it. 00:24:56.394 [2024-07-16 00:28:47.790897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.790923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.791081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.791107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.791256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.791281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.791428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.791454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 
00:24:56.395 [2024-07-16 00:28:47.791604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.791630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.791804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.791830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.791987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.792013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.792191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.792217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.792363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.792389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.792550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.792575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.792714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.792739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.792903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.792930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.793082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.793108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.793265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.793291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 
00:24:56.395 [2024-07-16 00:28:47.793542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.793567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.793711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.793738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.793889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.793915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.794103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.794129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.794285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.794310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.794478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.794504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.794664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.794690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.794850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.794880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.795049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.795075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.795327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.795352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 
00:24:56.395 [2024-07-16 00:28:47.795516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.795542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.795683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.795708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.795857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.795888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.796062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.796088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.796226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.796251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.796423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.796453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.796594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.796620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.796789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.796815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.796975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.797001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.797254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.797280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 
00:24:56.395 [2024-07-16 00:28:47.797419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.797444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.797616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.797642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.797779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.797805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.797961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.797988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.798240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.798265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.798408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.798434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.395 [2024-07-16 00:28:47.798600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.395 [2024-07-16 00:28:47.798626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.395 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.798796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.798821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.798998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.799024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.799185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.799211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 
00:24:56.396 [2024-07-16 00:28:47.799369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.799394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.799547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.799572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.799745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.799771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.799938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.799964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.800112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.800139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.800314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.800340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.800491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.800517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.800667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.800693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.800859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.800890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.801048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.801074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 
00:24:56.396 [2024-07-16 00:28:47.801224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.801250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.801401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.801426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.801603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.801629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.801768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.801793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.801937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.801963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.802111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.802137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.802295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.802320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.802492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.802518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.802695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.802722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.802883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.802908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 
00:24:56.396 [2024-07-16 00:28:47.803095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.803121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.803289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.803315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.803486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.803512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.803680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.803705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.803884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.803910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.804080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.804110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.804262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.804287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.804484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.804510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.804675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.804701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.804872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.804913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 
00:24:56.396 [2024-07-16 00:28:47.805086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.805111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.805270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.805295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.805450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.805475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.805648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.805673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.805838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.805864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.806014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.806039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.806180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.806206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.396 [2024-07-16 00:28:47.806363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.396 [2024-07-16 00:28:47.806388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.396 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.806567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.806594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.806806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.806831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 
00:24:56.397 [2024-07-16 00:28:47.807089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.807115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.807253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.807279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.807465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.807491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.807647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.807673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.807816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.807842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.808002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.808029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.808176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.808203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.808345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.808370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.808528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.808553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.808729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.808754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 
00:24:56.397 [2024-07-16 00:28:47.808917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.808943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.809121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.809146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.809304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.809330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.809501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.809527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.809665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.809690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.809845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.809871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.810020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.810045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.810223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.810248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.810413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.810438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.810609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.810635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 
00:24:56.397 [2024-07-16 00:28:47.810777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.810802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.810953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.810980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.811121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.811147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.811312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.811338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.811492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.811519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.811655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.811685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.811857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.811888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.812032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.812057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.812198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.812223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.812411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.812437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 
00:24:56.397 [2024-07-16 00:28:47.812627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.812652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.812829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.812855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.813060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.813102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.813284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.813311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.813456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.813482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.813655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.813682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.813823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.813849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.814026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.397 [2024-07-16 00:28:47.814053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:56.397 qpair failed and we were unable to recover it. 00:24:56.397 [2024-07-16 00:28:47.814201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.398 [2024-07-16 00:28:47.814228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:56.398 qpair failed and we were unable to recover it. 00:24:56.398 [2024-07-16 00:28:47.814409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.398 [2024-07-16 00:28:47.814436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420 00:24:56.398 qpair failed and we were unable to recover it. 
00:24:56.398 [2024-07-16 00:28:47.814586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.398 [2024-07-16 00:28:47.814612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cec000b90 with addr=10.0.0.2, port=4420
00:24:56.398 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats ~210 times between 00:28:47.814 and 00:28:47.853, alternating between tqpair=0x7f4cec000b90 and tqpair=0x7f4cfc000b90, always against addr=10.0.0.2, port=4420 ...]
00:24:56.403 [2024-07-16 00:28:47.853620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.403 [2024-07-16 00:28:47.853645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.403 qpair failed and we were unable to recover it.
00:24:56.403 [2024-07-16 00:28:47.853787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.853813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.853983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.854009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.854159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.854185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.854356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.854381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.854519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.854545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.854718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.854745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.854891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.854916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.855056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.855082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.855246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.855271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.855418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.855443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 
00:24:56.403 [2024-07-16 00:28:47.855593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.855618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.855762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.855788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.855975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.856001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.856150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.856175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.856351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.856377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.856525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.856554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.856700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.856726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.856919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.856945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.857094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.857120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.857257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.857283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 
00:24:56.403 [2024-07-16 00:28:47.857429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.403 [2024-07-16 00:28:47.857455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.403 qpair failed and we were unable to recover it. 00:24:56.403 [2024-07-16 00:28:47.857611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.857638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.857797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.857823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.857962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.857988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.858132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.858157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.858294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.858320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.858485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.858511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.858646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.858671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.858826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.858851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.859002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.859028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 
00:24:56.404 [2024-07-16 00:28:47.859228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.859253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.859408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.859434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.859592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.859617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.859805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.859831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.860022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.860048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.860196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.860222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.860384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.860409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.860573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.860598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.860769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.860794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.860959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.860985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 
00:24:56.404 [2024-07-16 00:28:47.861142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.861168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.861341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.861366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.861530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.861555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.861707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.861732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.861920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.861946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.862117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.862143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.862281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.862306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.862481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.862506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.862672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.862698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.862857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.862893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 
00:24:56.404 [2024-07-16 00:28:47.863049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.863074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.863215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.863240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.863429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.863455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.863615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.863640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.863808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.863834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.863988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.864015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.864158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.864184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.864326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.864352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.864525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.864551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.864711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.864737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 
00:24:56.404 [2024-07-16 00:28:47.864912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.864938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.865093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.404 [2024-07-16 00:28:47.865119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.404 qpair failed and we were unable to recover it. 00:24:56.404 [2024-07-16 00:28:47.865278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.865304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.865484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.865510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.865648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.865674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.865835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.865860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.866066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.866092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.866254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.866279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.866468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.866494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.866642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.866668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 
00:24:56.405 [2024-07-16 00:28:47.866839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.866865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.867036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.867062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.867232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.867257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.867400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.867427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.867566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.867592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.867758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.867783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.867934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.867961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.868105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.868130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.868319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.868345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.868518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.868544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 
00:24:56.405 [2024-07-16 00:28:47.868720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.868745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.868910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.868936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.869110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.869139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.869292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.869319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.869497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.869523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.869666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.869692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.869841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.869867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.870040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.870065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.870237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.870263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.870409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.870435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 
00:24:56.405 [2024-07-16 00:28:47.870607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.870633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.870801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.870827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.871002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.871028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.871166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.871192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.871378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.871404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.871578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.871604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.871760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.871786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.871938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.871964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.872103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.872129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.872299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.872324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 
00:24:56.405 [2024-07-16 00:28:47.872481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.872506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.872673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.872699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.405 [2024-07-16 00:28:47.872839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.405 [2024-07-16 00:28:47.872864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.405 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.873026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.873052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.873192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.873218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.873371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.873397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.873539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.873564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.873718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.873743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.873898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.873924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.874116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.874142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 
00:24:56.406 [2024-07-16 00:28:47.874293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.874318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.874462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.874487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.874660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.874686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.874841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.874868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.875012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.875038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.875182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.875207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.875354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.875380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.875515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.875541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.875693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.875718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.875880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.875906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 
00:24:56.406 [2024-07-16 00:28:47.876046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.876072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.876242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.876267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.876443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.876473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.876622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.876647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.876782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.876808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.876992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.877018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.877185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.877210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.877357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.877382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.877551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.877576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 00:24:56.406 [2024-07-16 00:28:47.877752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.406 [2024-07-16 00:28:47.877777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.406 qpair failed and we were unable to recover it. 
00:24:56.406 [2024-07-16 00:28:47.877922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.406 [2024-07-16 00:28:47.877948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.406 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every further connect attempt against tqpair=0x7f4cfc000b90 (addr=10.0.0.2, port=4420, errno = 111), timestamps 2024-07-16 00:28:47.878123 through 00:28:47.916761 ...]
00:24:56.412 [2024-07-16 00:28:47.916940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.916968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.917120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.917146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.917311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.917336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.917507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.917533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.917670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.917696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.917881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.917907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.918097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.918122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.918283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.918310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.918458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.918484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.918658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.918684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 
00:24:56.412 [2024-07-16 00:28:47.918857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.918888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.919043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.919069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.919243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.919270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.919436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.919462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.919625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.919651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.919830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.919856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.920005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.920030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.920170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.920196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.920366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.920391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.920536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.920562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 
00:24:56.412 [2024-07-16 00:28:47.920765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.920790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.920936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.920963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.921117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.921146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.921310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.921336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.921513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.921539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.921681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.921707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.921887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.921913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.922075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.922100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.922263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.922289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.922467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.922493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 
00:24:56.412 [2024-07-16 00:28:47.922664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.922690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.922829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.922855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.923059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.923099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.923262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.923289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.923440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.923467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.412 [2024-07-16 00:28:47.923605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.412 [2024-07-16 00:28:47.923630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.412 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.923797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.923823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.923993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.924021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.924175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.924202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.924355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.924381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 
00:24:56.413 [2024-07-16 00:28:47.924551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.924577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.924731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.924757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.924898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.924925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.925065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.925091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.925236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.925263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.925442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.925469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.925630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.925656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.925801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.925827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.925980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.926007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.926198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.926239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 
00:24:56.413 [2024-07-16 00:28:47.926405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.926443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.926623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.926650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.926797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.926823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.926979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.927004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.927173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.927198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.927349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.927375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.927540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.927564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.927737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.927763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.927915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.927942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.928105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.928130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 
00:24:56.413 [2024-07-16 00:28:47.928303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.928329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.928494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.928519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.928657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.928682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.928857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.928887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.929039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.929064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.929236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.929262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.929401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.929427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.929588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.929613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.929765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.929790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.929941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.929966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 
00:24:56.413 [2024-07-16 00:28:47.930132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.930170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.930351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.930378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.930526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.930552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.930720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.930746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.930904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.930930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.931076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.931101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.413 [2024-07-16 00:28:47.931279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.413 [2024-07-16 00:28:47.931310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.413 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.931461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.931486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.931635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.931660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.931803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.931830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 
00:24:56.414 [2024-07-16 00:28:47.931978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.932004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.932180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.932205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.932344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.932370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.932560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.932585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.932729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.932755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.932923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.932962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.933154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.933193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.933353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.933381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.933543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.933569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.933764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.933792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 
00:24:56.414 [2024-07-16 00:28:47.934004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.934031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.934175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.934201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.934373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.934399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.934598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.934623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf4000b90 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.934789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.934817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.935010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.935036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.935186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.935211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.935349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.935374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.935518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.935543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.935690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.935715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 
00:24:56.414 [2024-07-16 00:28:47.935886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.935911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.936063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.936088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.936259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.936285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.936432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.936463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.936666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.936692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.936844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.936869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.937035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.937061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.937214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.937240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.937385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.937410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.937571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.937596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 
00:24:56.414 [2024-07-16 00:28:47.937778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.937804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.937975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.938001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.938156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.938181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.938379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.938405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.938553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.938578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.938769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.938795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.938966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.938991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.939147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.414 [2024-07-16 00:28:47.939173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.414 qpair failed and we were unable to recover it. 00:24:56.414 [2024-07-16 00:28:47.939324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.939350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.939526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.939552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 
00:24:56.415 [2024-07-16 00:28:47.939696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.939721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.939896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.939921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.940081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.940107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.940279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.940304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.940465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.940490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.940650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.940676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.940854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.940884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.941064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.941089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.941277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.941302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 00:24:56.415 [2024-07-16 00:28:47.941449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.941475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it. 
00:24:56.415 [2024-07-16 00:28:47.941646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.415 [2024-07-16 00:28:47.941676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3f0 with addr=10.0.0.2, port=4420 00:24:56.415 qpair failed and we were unable to recover it.
00:24:56.415 [... the three-line connect()/qpair failure above repeats 135 more times for tqpair=0xecc3f0, timestamps 00:28:47.941848 through 00:28:47.966829; repetitions omitted ...]
00:24:56.691 [2024-07-16 00:28:47.967031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.691 [2024-07-16 00:28:47.967070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.691 qpair failed and we were unable to recover it.
00:24:56.693 [... the same failure repeats 73 more times for tqpair=0x7f4cfc000b90, timestamps 00:28:47.967235 through 00:28:47.980658; repetitions omitted ...]
00:24:56.693 [2024-07-16 00:28:47.980843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.980869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.981057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.981083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.981242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.981268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.981448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.981474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.981641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.981666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.981858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.981893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.982037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.982064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.982239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.982264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.982403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.982429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.982572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.982598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 
00:24:56.693 [2024-07-16 00:28:47.982767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.982792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.982945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.982973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.983113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.983139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.983341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.983367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.983537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.983563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.983704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.983730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.983895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.983922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.984077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.984103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.984243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.984269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 00:24:56.693 [2024-07-16 00:28:47.984451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.693 [2024-07-16 00:28:47.984478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.693 qpair failed and we were unable to recover it. 
00:24:56.693 [2024-07-16 00:28:47.984684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.984711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.984855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.984887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.985049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.985074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.985243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.985269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.985427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.985454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.985619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.985645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.985792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.985818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.986003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.986029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.986167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.986202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.986377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.986402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 
00:24:56.694 [2024-07-16 00:28:47.986574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.986600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.986755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.986781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.986947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.986974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.987123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.987154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.987296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.987322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.987480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.987506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.987679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.987705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.987874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.987905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.988075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.988101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.988272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.988297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 
00:24:56.694 [2024-07-16 00:28:47.988459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.988486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.988643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.988669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.988854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.988894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.989050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.989078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.989246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.989273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.989432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.989458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.989596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.989623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.989799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.989826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.989989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.990015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.990189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.990215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 
00:24:56.694 [2024-07-16 00:28:47.990397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.990423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.990590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.990616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.990796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.990823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.990976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.991003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.991146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.991172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.991312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.991339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.991493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.991519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.991695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.991721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.991872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.991906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.992077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.992103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 
00:24:56.694 [2024-07-16 00:28:47.992285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.694 [2024-07-16 00:28:47.992311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.694 qpair failed and we were unable to recover it. 00:24:56.694 [2024-07-16 00:28:47.992470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.992496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.992669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.992697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.992846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.992873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.993041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.993067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.993215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.993241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.993417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.993450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.993596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.993624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.993824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.993849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.994023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.994049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 
00:24:56.695 [2024-07-16 00:28:47.994205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.994232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.994412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.994439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.994586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.994612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.994764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.994794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.994973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.995002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.995146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.995172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.995340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.995366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.995519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.995546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.995683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.995708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.995852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.995889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 
00:24:56.695 [2024-07-16 00:28:47.996033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.996059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.996224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.996250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.996393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.996419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.996592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.996624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.996767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.996793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.996941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.996967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.997138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.997164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.997320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.997347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.997488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.997514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.997674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.997700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 
00:24:56.695 [2024-07-16 00:28:47.997835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.997861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.998044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.998071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.998245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.998271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.998443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.998468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.998608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.998634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.998791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.998818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.999016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.999042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.999200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.999225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.999401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.999426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.999569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.999595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 
00:24:56.695 [2024-07-16 00:28:47.999744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.999771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:47.999956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.695 [2024-07-16 00:28:47.999983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.695 qpair failed and we were unable to recover it. 00:24:56.695 [2024-07-16 00:28:48.000173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.000202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.000349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.000375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.000545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.000571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.000734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.000760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.000950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.000976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.001117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.001143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.001316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.001343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.001516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.001542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 
00:24:56.696 [2024-07-16 00:28:48.001695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.001722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.001858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.001898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.002068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.002093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.002274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.002304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.002484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.002511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.002647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.002673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.002814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.002841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.003040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.003066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.003248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.003274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.003420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.003446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 
00:24:56.696 [2024-07-16 00:28:48.003593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.003619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.003757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.003783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.003936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.003962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.004135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.004161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.004303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.004329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.004496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.004522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.004699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.004725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.004872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.004906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.005072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.005098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 00:24:56.696 [2024-07-16 00:28:48.005246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.696 [2024-07-16 00:28:48.005272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.696 qpair failed and we were unable to recover it. 
00:24:56.696 [2024-07-16 00:28:48.005416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.696 [2024-07-16 00:28:48.005442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.696 qpair failed and we were unable to recover it.
[... the same three-line error repeats 209 more times between 00:28:48.005580 and 00:28:48.044603; only the timestamps advance, while tqpair=0x7f4cfc000b90, addr=10.0.0.2 and port=4420 are identical in every occurrence ...]
00:24:56.701 [2024-07-16 00:28:48.044772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.701 [2024-07-16 00:28:48.044798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.701 qpair failed and we were unable to recover it. 00:24:56.701 [2024-07-16 00:28:48.044978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.701 [2024-07-16 00:28:48.045004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.701 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.045167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.045193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.045363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.045389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.045560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.045586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.045729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.045755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.045904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.045931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.046094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.046120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.046291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.046316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.046494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.046521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 
00:24:56.702 [2024-07-16 00:28:48.046661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.046687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.046859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.046890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.047038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.047065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.047206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.047236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.047376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.047402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.047571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.047596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.047768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.047794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.047964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.047991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.048161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.048186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.048372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.048397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 
00:24:56.702 [2024-07-16 00:28:48.048536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.048562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.048736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.048761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.048921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.048948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.049123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.049150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.049324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.049350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.049519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.049544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.049696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.049721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.049867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.049898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.050106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.050133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.050280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.050305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 
00:24:56.702 [2024-07-16 00:28:48.050446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.050472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.050630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.050656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.050832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.050858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.051016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.051042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.051216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.051242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.051420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.051446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.051595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.051621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.702 qpair failed and we were unable to recover it. 00:24:56.702 [2024-07-16 00:28:48.051801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.702 [2024-07-16 00:28:48.051826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.051987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.052014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.052169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.052195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 
00:24:56.703 [2024-07-16 00:28:48.052344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.052369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.052524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.052551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.052726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.052752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.052928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.052955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.053094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.053119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.053287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.053312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.053464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.053489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.053680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.053705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.053884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.053910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.054070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.054096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 
00:24:56.703 [2024-07-16 00:28:48.054245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.054271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.054411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.054437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.054608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.054634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.054803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.054835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.054990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.055016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.055181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.055207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.055347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.055373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.055523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.055549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.055702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.055727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 00:24:56.703 [2024-07-16 00:28:48.055891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.703 [2024-07-16 00:28:48.055917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420 00:24:56.703 qpair failed and we were unable to recover it. 
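For triage context: errno 111 on Linux is ECONNREFUSED, meaning the initiator's connect() to 10.0.0.2:4420 reaches the host but finds no listener, which is the expected state while the target side of this disconnect test is down, so the reconnect attempts fail fast and retry. The same condition can be probed from the initiator host with plain bash (a sketch using bash's /dev/tcp redirection; this is not part of the autotest scripts):

  # Probe 10.0.0.2:4420 once; a refused or timed-out connect takes the else branch.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener present on 10.0.0.2:4420"
  else
      echo "no listener on 10.0.0.2:4420 (connect refused or timed out)"
  fi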
00:24:56.703 [2024-07-16 00:28:48.056094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.703 [2024-07-16 00:28:48.056120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.703 qpair failed and we were unable to recover it.
00:24:56.703 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:56.703 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:24:56.703 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:56.703 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:56.703 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the connect()/qpair failure triplet keeps repeating in the background, 00:28:48.056 through 00:28:48.059, while the script trace above resumes]
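The timing_exit/xtrace_disable/set +x sequence above is the harness closing a timed region and muting its own `set -x` trace around bookkeeping. A stripped-down sketch of that pattern (illustrative only; SPDK's real helpers also save and restore the prior xtrace state):

  xtrace_disable() { set +x; }    # stop echoing every command
  xtrace_restore() { set -x; }    # resume echoing

  xtrace_disable
  echo "timing: start_nvmf_tgt done"   # bookkeeping stays out of the trace
  xtrace_restore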
00:24:56.704 [2024-07-16 00:28:48.059277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.704 [2024-07-16 00:28:48.059303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.704 qpair failed and we were unable to recover it.
[the same failure triplet repeats verbatim, with only the timestamps advancing, from 00:28:48.059 through 00:28:48.072]
[... error record repeats, 00:28:48.072785 through 00:28:48.073912 ...]
00:24:56.705 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:56.705 [2024-07-16 00:28:48.074056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.705 [2024-07-16 00:28:48.074083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.705 qpair failed and we were unable to recover it.
00:24:56.705 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:56.705 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.705 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... error record repeats, interleaved with the script lines above, 00:28:48.074233 through 00:28:48.075804 ...]
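For context, rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the target to create a 64 MiB RAM-backed bdev with 512-byte blocks. A sketch of the JSON-RPC exchange this wraps, assuming SPDK's default RPC socket path /var/tmp/spdk.sock and the usual size-to-block translation (64 MiB / 512 B = 131072 blocks); both are assumptions to verify against your SPDK version, not details the log confirms:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    /* num_blocks = 64 MiB / 512 B = 131072 -- an assumption about how the
     * shell wrapper translates its positional arguments. */
    static const char req[] =
        "{\"jsonrpc\":\"2.0\",\"id\":1,"
        "\"method\":\"bdev_malloc_create\","
        "\"params\":{\"name\":\"Malloc0\","
        "\"num_blocks\":131072,\"block_size\":512}}";

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("rpc socket");
        return 1;
    }
    if (write(fd, req, sizeof(req) - 1) < 0) {   /* send the request... */
        perror("write");
        close(fd);
        return 1;
    }

    char resp[512];
    ssize_t n = read(fd, resp, sizeof(resp) - 1); /* ...and read the reply */
    if (n > 0) {
        resp[n] = '\0';
        /* On success the reply carries the bdev name, matching the bare
         * "Malloc0" line that rpc_cmd echoes further down in this log. */
        printf("%s\n", resp);
    }
    close(fd);
    return 0;
}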
[... error record repeats, 00:28:48.076007 through 00:28:48.097159; the host retries continuously and every connect() to 10.0.0.2, port=4420 is refused with errno = 111 ...]
[... error record repeats, 00:28:48.097327 through 00:28:48.097768 ...]
00:24:56.709 Malloc0
[... error record repeats, 00:28:48.097925 through 00:28:48.098142 ...]
00:24:56.709 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.709 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:56.709 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.709 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... error record repeats, 00:28:48.098321 through 00:28:48.098737 ...]
[... error record repeats, 00:28:48.098892 through 00:28:48.100753 ...]
[... error record repeats, 00:28:48.100930 through 00:28:48.101513 ...]
00:24:56.709 [2024-07-16 00:28:48.101558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... error record repeats, 00:28:48.101685 through 00:28:48.102473 ...]
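The *** TCP Transport Init *** notice above is the target side creating its TCP transport in response to rpc_cmd nvmf_create_transport -t tcp -o; what the -o flag toggles varies by SPDK version, so it is left out of the sketch below rather than guessed. A plausible wire form of the request, same delivery path and caveats as the previous sketch:

#include <stdio.h>

int main(void)
{
    /* Request body only; deliver it over /var/tmp/spdk.sock exactly as in
     * the bdev_malloc_create sketch above. trtype is the one parameter the
     * log confirms; any option implied by -o is intentionally omitted. */
    static const char req[] =
        "{\"jsonrpc\":\"2.0\",\"id\":2,"
        "\"method\":\"nvmf_create_transport\","
        "\"params\":{\"trtype\":\"tcp\"}}";
    puts(req);
    return 0;
}

Note that the host's connect() retries keep failing even after this notice: creating the transport does not by itself open a listening socket, so port 4420 stays closed until a subsystem and listener are added, which is consistent with the errors continuing below.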
00:24:56.710 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.710 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:56.710 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.710 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:56.711 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.711 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:56.711 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.711 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:56.712 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.712 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:56.712 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.712 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
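The rpc_cmd traces above are thin wrappers around SPDK's scripts/rpc.py. Outside the autotest harness, roughly the same target-side setup can be reproduced as below; this is a sketch that assumes an SPDK checkout, a running nvmf_tgt on the default RPC socket, and an existing Malloc0 bdev (the nvmf_create_transport step is inferred from the "TCP Transport Init" notice, not traced in this excerpt):

```bash
# Roughly the subsystem setup the test drives through rpc_cmd.
# Assumes: SPDK checkout, running nvmf_tgt on the default RPC socket,
# and a Malloc0 bdev that already exists.
./scripts/rpc.py nvmf_create_transport -t tcp          # inferred from "TCP Transport Init"
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDK00000000000001                             # -a: allow any host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420                           # produces the Listening NOTICE
```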
00:24:56.712 [2024-07-16 00:28:48.129694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.712 [2024-07-16 00:28:48.129720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cfc000b90 with addr=10.0.0.2, port=4420
00:24:56.712 qpair failed and we were unable to recover it.
00:24:56.712 [2024-07-16 00:28:48.129809] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:56.712 [2024-07-16 00:28:48.132403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.712 [2024-07-16 00:28:48.132582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.712 [2024-07-16 00:28:48.132609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.712 [2024-07-16 00:28:48.132625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.712 [2024-07-16 00:28:48.132637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.712 [2024-07-16 00:28:48.132674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.713 qpair failed and we were unable to recover it.
00:24:56.713 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.713 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:56.713 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.713 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:56.713 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.713 00:28:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2390416
00:24:56.713 [2024-07-16 00:28:48.142168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.713 [2024-07-16 00:28:48.142334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.713 [2024-07-16 00:28:48.142361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.713 [2024-07-16 00:28:48.142376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.713 [2024-07-16 00:28:48.142388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.713 [2024-07-16 00:28:48.142417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.713 qpair failed and we were unable to recover it.
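In the CONNECT failures above, sct 1 marks a command-specific status and sc 130 is 0x82, which for a Fabrics CONNECT command corresponds to Connect Invalid Parameters in the NVMe-oF status tables; that is consistent with the target-side complaint about controller ID 0x1, which is no longer valid once the disconnect test has torn down the admin connection. A small bash helper for doing the decimal-to-hex decode when reading such logs:

```bash
# Decode the "sct 1, sc 130" pair from the CONNECT failures above.
sct=1 sc=130
printf 'status code type: %d (1 = command specific)\n' "$sct"
printf 'status code: %d = 0x%02x\n' "$sc" "$sc"       # prints 0x82
case $(printf '0x%02x' "$sc") in
  0x82) echo "Fabrics CONNECT: Invalid Parameters (cf. Unknown controller ID)" ;;
  *)    echo "consult the NVMe-oF status code tables" ;;
esac
```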
00:24:56.713 [2024-07-16 00:28:48.152211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.152359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.152385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.152400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.152411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.152453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 00:24:56.713 [2024-07-16 00:28:48.162181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.162347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.162377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.162393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.162420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.162448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 00:24:56.713 [2024-07-16 00:28:48.172210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.172366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.172392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.172407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.172418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.172462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 
00:24:56.713 [2024-07-16 00:28:48.182201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.182371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.182396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.182409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.182436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.182464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 00:24:56.713 [2024-07-16 00:28:48.192235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.192389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.192415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.192429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.192441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.192485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 00:24:56.713 [2024-07-16 00:28:48.202218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.202378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.202403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.202417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.202428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.202462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 
00:24:56.713 [2024-07-16 00:28:48.212227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.212404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.212430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.212458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.212470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.212498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 00:24:56.713 [2024-07-16 00:28:48.222315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.222470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.222497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.222515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.222528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.222574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 00:24:56.713 [2024-07-16 00:28:48.232339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:56.713 [2024-07-16 00:28:48.232491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:56.713 [2024-07-16 00:28:48.232516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:56.713 [2024-07-16 00:28:48.232531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:56.713 [2024-07-16 00:28:48.232543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:56.713 [2024-07-16 00:28:48.232586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:56.713 qpair failed and we were unable to recover it. 
00:24:56.713 [2024-07-16 00:28:48.242338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.713 [2024-07-16 00:28:48.242502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.713 [2024-07-16 00:28:48.242528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.713 [2024-07-16 00:28:48.242542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.713 [2024-07-16 00:28:48.242554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.713 [2024-07-16 00:28:48.242583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.713 qpair failed and we were unable to recover it.
00:24:56.713 [2024-07-16 00:28:48.252359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.713 [2024-07-16 00:28:48.252509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.713 [2024-07-16 00:28:48.252540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.713 [2024-07-16 00:28:48.252554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.713 [2024-07-16 00:28:48.252566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.713 [2024-07-16 00:28:48.252595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.713 qpair failed and we were unable to recover it.
00:24:56.713 [2024-07-16 00:28:48.262405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.713 [2024-07-16 00:28:48.262556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.713 [2024-07-16 00:28:48.262581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.713 [2024-07-16 00:28:48.262595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.713 [2024-07-16 00:28:48.262607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.713 [2024-07-16 00:28:48.262636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.713 qpair failed and we were unable to recover it.
00:24:56.713 [2024-07-16 00:28:48.272402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.713 [2024-07-16 00:28:48.272550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.713 [2024-07-16 00:28:48.272576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.713 [2024-07-16 00:28:48.272589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.713 [2024-07-16 00:28:48.272601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.713 [2024-07-16 00:28:48.272630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.713 qpair failed and we were unable to recover it.
00:24:56.713 [2024-07-16 00:28:48.282438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.713 [2024-07-16 00:28:48.282604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.713 [2024-07-16 00:28:48.282629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.713 [2024-07-16 00:28:48.282643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.713 [2024-07-16 00:28:48.282655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.713 [2024-07-16 00:28:48.282684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.713 qpair failed and we were unable to recover it.
00:24:56.973 [2024-07-16 00:28:48.292588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.973 [2024-07-16 00:28:48.292736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.973 [2024-07-16 00:28:48.292776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.973 [2024-07-16 00:28:48.292789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.973 [2024-07-16 00:28:48.292801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.973 [2024-07-16 00:28:48.292850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.973 qpair failed and we were unable to recover it.
00:24:56.973 [2024-07-16 00:28:48.302491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.973 [2024-07-16 00:28:48.302643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.973 [2024-07-16 00:28:48.302668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.973 [2024-07-16 00:28:48.302683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.973 [2024-07-16 00:28:48.302696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.973 [2024-07-16 00:28:48.302726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.973 qpair failed and we were unable to recover it.
00:24:56.973 [2024-07-16 00:28:48.312517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.973 [2024-07-16 00:28:48.312680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.973 [2024-07-16 00:28:48.312705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.973 [2024-07-16 00:28:48.312719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.973 [2024-07-16 00:28:48.312731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.973 [2024-07-16 00:28:48.312760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.973 qpair failed and we were unable to recover it.
00:24:56.973 [2024-07-16 00:28:48.322670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.973 [2024-07-16 00:28:48.322929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.973 [2024-07-16 00:28:48.322955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.973 [2024-07-16 00:28:48.322969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.973 [2024-07-16 00:28:48.322981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.973 [2024-07-16 00:28:48.323010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.973 qpair failed and we were unable to recover it.
00:24:56.973 [2024-07-16 00:28:48.332671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.973 [2024-07-16 00:28:48.332842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.973 [2024-07-16 00:28:48.332866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.973 [2024-07-16 00:28:48.332905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.973 [2024-07-16 00:28:48.332918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.973 [2024-07-16 00:28:48.332947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.973 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.342868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.343065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.343098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.343113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.343125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.343154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.352676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.352820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.352846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.352871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.352891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.352921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.362716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.362895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.362921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.362935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.362947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.362976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.372749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.372899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.372924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.372938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.372950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.372979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.382732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.382903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.382929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.382943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.382960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.382992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.392787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.392947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.392972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.392987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.392998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.393027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.402783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.402955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.402981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.402994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.403006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.403035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.412821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.413006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.413032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.413045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.413057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.413085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.422844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.423001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.423027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.423041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.423053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.423081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.432895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.433066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.433092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.433106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.433118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.433147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.442873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.443046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.443071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.443085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.443096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.443126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.452912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.453064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.453088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.453102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.453114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.453142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.462932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.463086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.463111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.463125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.463137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.463165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.472979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.473130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.974 [2024-07-16 00:28:48.473156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.974 [2024-07-16 00:28:48.473181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.974 [2024-07-16 00:28:48.473195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.974 [2024-07-16 00:28:48.473241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.974 qpair failed and we were unable to recover it.
00:24:56.974 [2024-07-16 00:28:48.483002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.974 [2024-07-16 00:28:48.483163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.483188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.483202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.483214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.483243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.493008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.493165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.493191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.493205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.493217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.493246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.503031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.503178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.503203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.503217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.503229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.503258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.513061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.513214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.513238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.513253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.513264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.513293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.523102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.523260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.523284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.523298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.523310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.523339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.533141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.533329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.533357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.533386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.533398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.533427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.543170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.543320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.543345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.543359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.543371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.543412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.553183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.553335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.553360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.553374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.553386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.553429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.563233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.563403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.563429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.563448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.563461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.563490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.573234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.573383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.573408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.573421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.573433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.573463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:56.975 [2024-07-16 00:28:48.583321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:56.975 [2024-07-16 00:28:48.583480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:56.975 [2024-07-16 00:28:48.583506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:56.975 [2024-07-16 00:28:48.583525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:56.975 [2024-07-16 00:28:48.583539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:56.975 [2024-07-16 00:28:48.583583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.975 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.593301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.593458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.593484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.593499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.593511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.593556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.603334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.603505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.603531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.603544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.603556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.603585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.613332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.613482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.613508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.613522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.613533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.613563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.623375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.623549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.623574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.623587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.623599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.623628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.633406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.633558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.633583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.633597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.633609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.633638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.643500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.643663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.643703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.643717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.643728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.643771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.653550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.653721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.653752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.653781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.653793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.653822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.663531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.663696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.663721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.663735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.663747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.663775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.673521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.673669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.673693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.673708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.673720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.673749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.683576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.683756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.683782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.683811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.683822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.683865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.693594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.236 [2024-07-16 00:28:48.693750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.236 [2024-07-16 00:28:48.693775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.236 [2024-07-16 00:28:48.693789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.236 [2024-07-16 00:28:48.693801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.236 [2024-07-16 00:28:48.693836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.236 qpair failed and we were unable to recover it.
00:24:57.236 [2024-07-16 00:28:48.703610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.703763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.703788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.703802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.703814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.703842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.713660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.713827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.713852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.713866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.713885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.713916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.723687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.723847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.723872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.723894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.723907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.723935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.733689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.733839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.733864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.733884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.733898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.733927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.743722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.743866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.743903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.743919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.743930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.743960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.753758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.753973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.753999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.754013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.754025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.754054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.763810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.763970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.763995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.764009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.764021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.764050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.773827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.774051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.774078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.774093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.774108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.774141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.783842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.783995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.784022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.784036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.784054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.784084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.793886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.794039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.794065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.794080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.794092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.794122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.803915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.237 [2024-07-16 00:28:48.804078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.237 [2024-07-16 00:28:48.804104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.237 [2024-07-16 00:28:48.804118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.237 [2024-07-16 00:28:48.804130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.237 [2024-07-16 00:28:48.804159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.237 qpair failed and we were unable to recover it.
00:24:57.237 [2024-07-16 00:28:48.813929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.237 [2024-07-16 00:28:48.814084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.237 [2024-07-16 00:28:48.814110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.237 [2024-07-16 00:28:48.814124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.237 [2024-07-16 00:28:48.814135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.237 [2024-07-16 00:28:48.814165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-16 00:28:48.823958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.237 [2024-07-16 00:28:48.824103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.237 [2024-07-16 00:28:48.824128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.237 [2024-07-16 00:28:48.824142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.237 [2024-07-16 00:28:48.824154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.237 [2024-07-16 00:28:48.824182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-16 00:28:48.833998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.237 [2024-07-16 00:28:48.834157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.237 [2024-07-16 00:28:48.834182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.237 [2024-07-16 00:28:48.834196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.237 [2024-07-16 00:28:48.834207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.237 [2024-07-16 00:28:48.834236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-16 00:28:48.844086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.237 [2024-07-16 00:28:48.844282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.237 [2024-07-16 00:28:48.844306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.238 [2024-07-16 00:28:48.844320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.238 [2024-07-16 00:28:48.844331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.238 [2024-07-16 00:28:48.844360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.498 [2024-07-16 00:28:48.854109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.498 [2024-07-16 00:28:48.854269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.498 [2024-07-16 00:28:48.854294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.498 [2024-07-16 00:28:48.854309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.498 [2024-07-16 00:28:48.854320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.498 [2024-07-16 00:28:48.854349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.498 qpair failed and we were unable to recover it. 00:24:57.498 [2024-07-16 00:28:48.864076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.498 [2024-07-16 00:28:48.864225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.864250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.864264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.864275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.864304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 
00:24:57.499 [2024-07-16 00:28:48.874146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.874317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.874343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.874376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.874390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.874419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:48.884149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.884310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.884335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.884348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.884360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.884388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:48.894181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.894336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.894362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.894376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.894388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.894435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 
00:24:57.499 [2024-07-16 00:28:48.904199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.904345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.904370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.904385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.904396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.904437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:48.914209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.914355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.914380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.914394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.914406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.914434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:48.924253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.924415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.924439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.924453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.924465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.924494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 
00:24:57.499 [2024-07-16 00:28:48.934288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.934440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.934466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.934480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.934492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.934532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:48.944345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.944501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.944526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.944540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.944552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.944581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:48.954367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.954532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.954557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.954571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.954583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.954627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 
00:24:57.499 [2024-07-16 00:28:48.964377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.964538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.964563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.964582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.964594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.964623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:48.974423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.974568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.974593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.974607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.974619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.974663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:48.984451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.984627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.984653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.984683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.984694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.984749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 
00:24:57.499 [2024-07-16 00:28:48.994441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:48.994587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:48.994613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:48.994627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.499 [2024-07-16 00:28:48.994639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.499 [2024-07-16 00:28:48.994668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.499 qpair failed and we were unable to recover it. 00:24:57.499 [2024-07-16 00:28:49.004513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.499 [2024-07-16 00:28:49.004677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.499 [2024-07-16 00:28:49.004702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.499 [2024-07-16 00:28:49.004716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.004744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.004772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 00:24:57.500 [2024-07-16 00:28:49.014584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.014729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.014755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.014769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.014781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.014810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 
00:24:57.500 [2024-07-16 00:28:49.024539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.024687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.024713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.024726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.024738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.024782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 00:24:57.500 [2024-07-16 00:28:49.034579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.034788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.034827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.034840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.034852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.034904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 00:24:57.500 [2024-07-16 00:28:49.044605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.044765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.044790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.044804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.044816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.044845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 
00:24:57.500 [2024-07-16 00:28:49.054619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.054791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.054822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.054837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.054863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.054901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 00:24:57.500 [2024-07-16 00:28:49.064637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.064782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.064807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.064820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.064833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.064861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 00:24:57.500 [2024-07-16 00:28:49.074687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.074861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.074896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.074912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.074924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.074965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 
00:24:57.500 [2024-07-16 00:28:49.084737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.084907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.084933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.084947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.084959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.084989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 00:24:57.500 [2024-07-16 00:28:49.094742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.094900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.094925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.094939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.094951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.094985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 00:24:57.500 [2024-07-16 00:28:49.104752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.500 [2024-07-16 00:28:49.104907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.500 [2024-07-16 00:28:49.104943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.500 [2024-07-16 00:28:49.104956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.500 [2024-07-16 00:28:49.104968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.500 [2024-07-16 00:28:49.105008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.500 qpair failed and we were unable to recover it. 
00:24:57.761 [2024-07-16 00:28:49.114780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.114950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.114976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.114990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.115002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.115031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.124824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.124996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.125022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.125036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.125048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.125077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.134829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.134999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.135023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.135037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.135048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.135078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 
00:24:57.762 [2024-07-16 00:28:49.144858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.145018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.145050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.145065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.145077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.145106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.154914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.155124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.155150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.155169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.155184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.155229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.164955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.165135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.165160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.165174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.165186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.165227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 
00:24:57.762 [2024-07-16 00:28:49.174952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.175101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.175126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.175140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.175152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.175181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.184985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.185130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.185156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.185170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.185188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.185218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.195008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.195158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.195183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.195197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.195209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.195238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 
00:24:57.762 [2024-07-16 00:28:49.205077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.205264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.205304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.205318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.205329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.205358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.215080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.215227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.215251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.215265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.215277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.215305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.225080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.225230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.225255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.225269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.225281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.225310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 
00:24:57.762 [2024-07-16 00:28:49.235155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.235312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.235337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.235351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.235363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.235421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.245173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.245335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.245361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.762 [2024-07-16 00:28:49.245375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.762 [2024-07-16 00:28:49.245387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.762 [2024-07-16 00:28:49.245415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.762 qpair failed and we were unable to recover it. 00:24:57.762 [2024-07-16 00:28:49.255179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.762 [2024-07-16 00:28:49.255368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.762 [2024-07-16 00:28:49.255393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.763 [2024-07-16 00:28:49.255407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.763 [2024-07-16 00:28:49.255419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.763 [2024-07-16 00:28:49.255448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.763 qpair failed and we were unable to recover it. 
00:24:57.763 [2024-07-16 00:28:49.265222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.763 [2024-07-16 00:28:49.265373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.763 [2024-07-16 00:28:49.265398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.763 [2024-07-16 00:28:49.265412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.763 [2024-07-16 00:28:49.265424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.763 [2024-07-16 00:28:49.265452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.763 qpair failed and we were unable to recover it. 00:24:57.763 [2024-07-16 00:28:49.275226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.763 [2024-07-16 00:28:49.275373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.763 [2024-07-16 00:28:49.275398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.763 [2024-07-16 00:28:49.275412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.763 [2024-07-16 00:28:49.275432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.763 [2024-07-16 00:28:49.275461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.763 qpair failed and we were unable to recover it. 00:24:57.763 [2024-07-16 00:28:49.285271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.763 [2024-07-16 00:28:49.285479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.763 [2024-07-16 00:28:49.285504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.763 [2024-07-16 00:28:49.285518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.763 [2024-07-16 00:28:49.285530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.763 [2024-07-16 00:28:49.285559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.763 qpair failed and we were unable to recover it. 
00:24:57.763 [2024-07-16 00:28:49.295324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.763 [2024-07-16 00:28:49.295474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.763 [2024-07-16 00:28:49.295499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.763 [2024-07-16 00:28:49.295514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.763 [2024-07-16 00:28:49.295525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.763 [2024-07-16 00:28:49.295569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.763 qpair failed and we were unable to recover it. 00:24:57.763 [2024-07-16 00:28:49.305346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.763 [2024-07-16 00:28:49.305521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.763 [2024-07-16 00:28:49.305546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.763 [2024-07-16 00:28:49.305560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.763 [2024-07-16 00:28:49.305572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.763 [2024-07-16 00:28:49.305600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.763 qpair failed and we were unable to recover it. 00:24:57.763 [2024-07-16 00:28:49.315366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:57.763 [2024-07-16 00:28:49.315518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.763 [2024-07-16 00:28:49.315543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:57.763 [2024-07-16 00:28:49.315574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.763 [2024-07-16 00:28:49.315587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:57.763 [2024-07-16 00:28:49.315631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.763 qpair failed and we were unable to recover it. 
00:24:57.763 [2024-07-16 00:28:49.325393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.763 [2024-07-16 00:28:49.325550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.763 [2024-07-16 00:28:49.325576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.763 [2024-07-16 00:28:49.325590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.763 [2024-07-16 00:28:49.325601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.763 [2024-07-16 00:28:49.325630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.763 qpair failed and we were unable to recover it.
00:24:57.763 [2024-07-16 00:28:49.335392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.763 [2024-07-16 00:28:49.335542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.763 [2024-07-16 00:28:49.335566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.763 [2024-07-16 00:28:49.335580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.763 [2024-07-16 00:28:49.335592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.763 [2024-07-16 00:28:49.335621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.763 qpair failed and we were unable to recover it.
00:24:57.763 [2024-07-16 00:28:49.345439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.763 [2024-07-16 00:28:49.345586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.763 [2024-07-16 00:28:49.345611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.763 [2024-07-16 00:28:49.345625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.763 [2024-07-16 00:28:49.345637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.763 [2024-07-16 00:28:49.345665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.763 qpair failed and we were unable to recover it.
00:24:57.763 [2024-07-16 00:28:49.355471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.763 [2024-07-16 00:28:49.355619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.763 [2024-07-16 00:28:49.355643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.763 [2024-07-16 00:28:49.355658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.763 [2024-07-16 00:28:49.355670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.763 [2024-07-16 00:28:49.355713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.763 qpair failed and we were unable to recover it.
00:24:57.763 [2024-07-16 00:28:49.365521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:57.763 [2024-07-16 00:28:49.365682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:57.763 [2024-07-16 00:28:49.365707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:57.763 [2024-07-16 00:28:49.365727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:57.763 [2024-07-16 00:28:49.365739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:57.763 [2024-07-16 00:28:49.365783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:57.763 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.375604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.375769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.375795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.375810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.375822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.375851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.385554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.385705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.385730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.385745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.385757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.385800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.395590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.395739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.395763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.395777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.395789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.395818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.405628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.405787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.405811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.405825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.405852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.405887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.415628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.415786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.415811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.415825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.415836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.415865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.425685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.425855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.425885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.425901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.425913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.425942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.435675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.435818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.435843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.435856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.435868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.435910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.445716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.445887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.445913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.445928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.445939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.445968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.455748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.455903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.455933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.455949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.455961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.455990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.465768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.465926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.465952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.465966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.465979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.466007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.475790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.475935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.475960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.024 [2024-07-16 00:28:49.475974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.024 [2024-07-16 00:28:49.475986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.024 [2024-07-16 00:28:49.476014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.024 qpair failed and we were unable to recover it.
00:24:58.024 [2024-07-16 00:28:49.485951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.024 [2024-07-16 00:28:49.486136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.024 [2024-07-16 00:28:49.486161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.486175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.486186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.486215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.495847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.496007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.496033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.496047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.496059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.496094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.505922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.506105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.506130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.506144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.506156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.506200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.515990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.516206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.516231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.516245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.516256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.516300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.525957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.526118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.526143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.526157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.526169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.526211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
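
[Annotation, not part of the captured output.] The eight-message block above repeats once per reconnect attempt; only the timestamps vary. On the target side, _nvmf_ctrlr_add_io_qpair() cannot find controller ID 0x1 in the subsystem, so it fails the I/O-queue CONNECT. On the host side, nvme_fabric.c sees that completion as sct 1, sc 130: status code type 1 is "command specific", and for the Fabrics CONNECT command status 0x82 (130 decimal) is Connect Invalid Parameters per the NVMe-oF spec, which matches an unknown CNTLID. nvme_tcp.c then abandons the tqpair and spdk_nvme_qpair_process_completions() hands the application -6 (-ENXIO). A minimal sketch of decoding that status pair from a completion; it uses only spdk/nvme_spec.h types and spells the Fabrics status numerically rather than guessing at an SPDK constant name:

/* Sketch, not taken from the test itself: classify the "sct 1, sc 130"
 * completion that _nvme_fabric_qpair_connect_poll() reports above. */
#include <stdbool.h>
#include "spdk/nvme_spec.h"

static bool
connect_rejected_invalid_param(const struct spdk_nvme_cpl *cpl)
{
	/* sct 1 == command-specific status. */
	if (cpl->status.sct != SPDK_NVME_SCT_COMMAND_SPECIFIC) {
		return false;
	}
	/* For the Fabrics CONNECT command, sc 0x82 (130 decimal) is
	 * "Connect Invalid Parameters", e.g. a CNTLID the target no longer
	 * knows, matching "Unknown controller ID 0x1" on the target side. */
	return cpl->status.sc == 0x82;
}
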
00:24:58.025 [2024-07-16 00:28:49.535984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.536137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.536162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.536180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.536192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.536237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.546036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.546187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.546218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.546233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.546245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.546274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.556021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.556167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.556192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.556206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.556218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.556262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.566111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.566303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.566343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.566357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.566369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.566412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.576136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.576386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.576412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.576429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.576441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.576487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.586170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.586341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.586367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.586381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.586408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.586443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.596162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.596340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.596365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.596393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.596406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.596434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.606238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.606443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.606468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.606481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.606493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.606522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.616227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.616380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.616405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.616420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.616447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.025 [2024-07-16 00:28:49.616475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.025 qpair failed and we were unable to recover it.
00:24:58.025 [2024-07-16 00:28:49.626316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.025 [2024-07-16 00:28:49.626468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.025 [2024-07-16 00:28:49.626493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.025 [2024-07-16 00:28:49.626507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.025 [2024-07-16 00:28:49.626519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.026 [2024-07-16 00:28:49.626563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.026 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.636396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.636558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.636583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.636598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.636610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.636653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.646337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.646537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.646576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.646590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.646601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.646645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.656326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.656479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.656505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.656519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.656531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.656560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.666357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.666555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.666580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.666594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.666606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.666635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.676415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.676563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.676588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.676603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.676621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.676665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.686449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.686620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.686645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.686659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.686671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.686715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.696449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.696596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.696622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.696636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.696648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.696677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.706542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.706685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.706710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.706724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.706736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.706776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.716513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.716663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.716688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.716702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.716714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.716743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.726582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.726745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.726773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.726787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.726817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.726847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.736583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.736731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.736757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.736771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.736783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.736812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.285 [2024-07-16 00:28:49.746597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.285 [2024-07-16 00:28:49.746744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.285 [2024-07-16 00:28:49.746769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.285 [2024-07-16 00:28:49.746783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.285 [2024-07-16 00:28:49.746795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.285 [2024-07-16 00:28:49.746838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.285 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.756630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.756778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.756804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.756818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.756830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.756870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.766669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.766854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.766893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.766921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.766935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.766965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
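
[Annotation, not part of the captured output.] For orientation, a minimal host-side sketch of the sequence these errors come from. Assumptions: this is not the test driver that produced the log, and it uses the async_mode I/O qpair option of recent SPDK releases so that the Fabrics CONNECT is polled by the application, the same path the messages above take. The admin connect succeeds; allocating the I/O qpair issues the CONNECT the target rejects; the -6 is the return value of spdk_nvme_qpair_process_completions():

/* Sketch under the assumptions above, not a definitive reproduction. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_io_qpair_opts qopts;
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_qpair *qpair;
	int32_t rc = 0;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "connect_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the error messages above carry. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "admin connect failed\n");
		return 1;
	}

	/* Allocating the I/O qpair sends the Fabrics CONNECT that the
	 * target is rejecting with "Unknown controller ID 0x1". */
	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
	qopts.async_mode = true; /* poll the CONNECT ourselves */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
	if (qpair != NULL) {
		/* While the CONNECT is in flight this returns 0; once the
		 * target fails it, a negative errno comes back: the
		 * -6 (-ENXIO) printed as "CQ transport error -6" above. */
		for (int i = 0; i < 1000 && rc == 0; i++) {
			rc = spdk_nvme_qpair_process_completions(qpair, 0);
		}
		if (rc < 0) {
			fprintf(stderr, "qpair failed: %d\n", rc);
		}
		spdk_nvme_ctrlr_free_io_qpair(qpair);
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}
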
00:24:58.286 [2024-07-16 00:28:49.776708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.776875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.776908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.776923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.776935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.776964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.786706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.786850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.786894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.786909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.786921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.786950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.796741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.796937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.796963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.796981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.796993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.797025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.806862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.807067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.807095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.807109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.807121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.807152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.816827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.816995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.817022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.817036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.817048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.817077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.826819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.826984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.827010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.827024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.827036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.827065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.836869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.837045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.837070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.837084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.837096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.837125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.847018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.847206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.847248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.847263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.847275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.847319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.856933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.857110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.857141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.857156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.857168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.857201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.866965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.867158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.867183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.867197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.867209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.867237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.876987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.877140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.877166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.877179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.877191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.877220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.887042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.887233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.286 [2024-07-16 00:28:49.887258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.286 [2024-07-16 00:28:49.887272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.286 [2024-07-16 00:28:49.887284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.286 [2024-07-16 00:28:49.887313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.286 qpair failed and we were unable to recover it.
00:24:58.286 [2024-07-16 00:28:49.897050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.286 [2024-07-16 00:28:49.897204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.897229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.897245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.897258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.897295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.907100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.907268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.907294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.907308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.907334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.907363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.917134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.917303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.917327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.917341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.917353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.917396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.927150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.927313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.927338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.927351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.927363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.927392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.937162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.937351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.937376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.937390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.937402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.937430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.947211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.947384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.947414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.947429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.947441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.947470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.957324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.957493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.957519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.957533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.957559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.957587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.967337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.967530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.967570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.967584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.967595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.967639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.977267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.977414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.977439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.977453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.977465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.977494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.987322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.987472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.987497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.987511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.987524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.987558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:49.997345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:49.997489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:49.997515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:49.997529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:49.997541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:49.997571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:50.007367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:58.546 [2024-07-16 00:28:50.007530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:58.546 [2024-07-16 00:28:50.007555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:58.546 [2024-07-16 00:28:50.007569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:58.546 [2024-07-16 00:28:50.007582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:58.546 [2024-07-16 00:28:50.007613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.546 qpair failed and we were unable to recover it.
00:24:58.546 [2024-07-16 00:28:50.017541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.546 [2024-07-16 00:28:50.017753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.546 [2024-07-16 00:28:50.017782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.546 [2024-07-16 00:28:50.017797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.546 [2024-07-16 00:28:50.017809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.546 [2024-07-16 00:28:50.017841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.027453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.027602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.027629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.027643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.027655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.027686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.037493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.037648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.037683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.037698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.037711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.037740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 
00:24:58.547 [2024-07-16 00:28:50.047544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.047753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.047778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.047793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.047805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.047835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.057522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.057683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.057708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.057723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.057735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.057764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.067573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.067722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.067748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.067763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.067775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.067804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 
00:24:58.547 [2024-07-16 00:28:50.077551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.077695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.077721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.077735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.077754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.077784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.087605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.087794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.087819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.087833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.087845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.087874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.097618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.097782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.097807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.097821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.097833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.097862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 
00:24:58.547 [2024-07-16 00:28:50.107657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.107815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.107841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.107855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.107874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.107914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.117658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.117816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.117841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.117855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.117867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.117906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.127699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.127866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.127898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.127913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.127925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.127954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 
00:24:58.547 [2024-07-16 00:28:50.137717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.137893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.137917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.137931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.137943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.137971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.147776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.147941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.147966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.147980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.147992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.148021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-16 00:28:50.157901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.547 [2024-07-16 00:28:50.158088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.547 [2024-07-16 00:28:50.158113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.547 [2024-07-16 00:28:50.158127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.547 [2024-07-16 00:28:50.158139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.547 [2024-07-16 00:28:50.158168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.547 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-16 00:28:50.167849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.168074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.168100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.168119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.168132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.168161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-16 00:28:50.177844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.178034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.178060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.178074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.178086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.178115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-16 00:28:50.187868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.188026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.188052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.188065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.188077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.188106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-16 00:28:50.197927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.198074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.198100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.198114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.198126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.198155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-16 00:28:50.208014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.208219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.208258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.208272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.208283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.208326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-16 00:28:50.218000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.218200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.218225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.218238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.218250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.218292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-16 00:28:50.228095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.228258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.228282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.228295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.228307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.228350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-16 00:28:50.238090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.238239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.238264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.238278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.238290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.238318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-16 00:28:50.248121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.248310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.248350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.248364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.248375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.248418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-16 00:28:50.258053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.258202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.258227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.258247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.258260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.258289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-16 00:28:50.268228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.268397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.268422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.268438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.809 [2024-07-16 00:28:50.268453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.809 [2024-07-16 00:28:50.268482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-16 00:28:50.278125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.809 [2024-07-16 00:28:50.278284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.809 [2024-07-16 00:28:50.278309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.809 [2024-07-16 00:28:50.278323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.278335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.278363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 
00:24:58.810 [2024-07-16 00:28:50.288185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.288354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.288380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.288394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.288420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.288449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.298217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.298370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.298395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.298409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.298421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.298463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.308231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.308418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.308443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.308457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.308468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.308497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 
00:24:58.810 [2024-07-16 00:28:50.318251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.318402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.318427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.318441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.318452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.318481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.328298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.328458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.328483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.328497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.328509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.328537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.338430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.338624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.338650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.338664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.338676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.338704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 
00:24:58.810 [2024-07-16 00:28:50.348337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.348490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.348521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.348536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.348548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.348577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.358420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.358608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.358633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.358647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.358659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.358688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.368394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.368560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.368586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.368599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.368611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.368640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 
00:24:58.810 [2024-07-16 00:28:50.378410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.378564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.378589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.378602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.378615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.378643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.388513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.388679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.388703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.388717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.388729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.388779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.398480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.398641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.398666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.398681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.398693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.398722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 
00:24:58.810 [2024-07-16 00:28:50.408531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.408698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.408723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.408737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.408749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.408779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-16 00:28:50.418547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:58.810 [2024-07-16 00:28:50.418734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:58.810 [2024-07-16 00:28:50.418759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:58.810 [2024-07-16 00:28:50.418773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:58.810 [2024-07-16 00:28:50.418785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:58.810 [2024-07-16 00:28:50.418814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:58.811 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-16 00:28:50.428584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.428752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.428776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.428790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.070 [2024-07-16 00:28:50.428803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.070 [2024-07-16 00:28:50.428847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-16 00:28:50.438679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.438842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.438895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.438914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.070 [2024-07-16 00:28:50.438926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.070 [2024-07-16 00:28:50.438957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-16 00:28:50.448625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.448829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.448855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.448868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.070 [2024-07-16 00:28:50.448888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.070 [2024-07-16 00:28:50.448918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-16 00:28:50.458639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.458786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.458810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.458824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.070 [2024-07-16 00:28:50.458836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.070 [2024-07-16 00:28:50.458865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-16 00:28:50.468743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.468935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.468976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.468994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.070 [2024-07-16 00:28:50.469006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.070 [2024-07-16 00:28:50.469035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-16 00:28:50.478692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.478850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.478881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.478898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.070 [2024-07-16 00:28:50.478921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.070 [2024-07-16 00:28:50.478951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-16 00:28:50.488741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.488934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.488962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.488977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.070 [2024-07-16 00:28:50.488989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.070 [2024-07-16 00:28:50.489019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-16 00:28:50.498895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.499072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.499097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.499111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.070 [2024-07-16 00:28:50.499123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.070 [2024-07-16 00:28:50.499153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-16 00:28:50.508780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.070 [2024-07-16 00:28:50.508936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.070 [2024-07-16 00:28:50.508962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.070 [2024-07-16 00:28:50.508976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.071 [2024-07-16 00:28:50.508988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.071 [2024-07-16 00:28:50.509017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-16 00:28:50.518815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.071 [2024-07-16 00:28:50.518972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.071 [2024-07-16 00:28:50.518997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.071 [2024-07-16 00:28:50.519011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.071 [2024-07-16 00:28:50.519023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.071 [2024-07-16 00:28:50.519052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.071 qpair failed and we were unable to recover it. 
00:24:59.071 [2024-07-16 00:28:50.528842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.529016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.529042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.529056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.529068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.529096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.538865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.539032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.539058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.539072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.539084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.539113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.548897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.549045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.549071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.549085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.549097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.549126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.558935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.559104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.559129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.559142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.559154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.559183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.569004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.569160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.569185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.569205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.569217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.569258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.578975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.579142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.579168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.579182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.579208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.579237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.589012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.589161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.589186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.589201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.589213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.589260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.599048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.599201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.599226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.599241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.599253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.599300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.609105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.609268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.609294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.609308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.609320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.609348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.619101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.619251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.619276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.619290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.619302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.619331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.629151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.629304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.629330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.629344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.629356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.629400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.639253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.639408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.639433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.639447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.639460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.639489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.649192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.071 [2024-07-16 00:28:50.649352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.071 [2024-07-16 00:28:50.649378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.071 [2024-07-16 00:28:50.649392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.071 [2024-07-16 00:28:50.649404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.071 [2024-07-16 00:28:50.649434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.071 [2024-07-16 00:28:50.659210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.072 [2024-07-16 00:28:50.659379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.072 [2024-07-16 00:28:50.659405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.072 [2024-07-16 00:28:50.659424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.072 [2024-07-16 00:28:50.659438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.072 [2024-07-16 00:28:50.659468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.072 qpair failed and we were unable to recover it.
00:24:59.072 [2024-07-16 00:28:50.669319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.072 [2024-07-16 00:28:50.669499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.072 [2024-07-16 00:28:50.669523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.072 [2024-07-16 00:28:50.669538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.072 [2024-07-16 00:28:50.669549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.072 [2024-07-16 00:28:50.669592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.072 qpair failed and we were unable to recover it.
00:24:59.072 [2024-07-16 00:28:50.679254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.072 [2024-07-16 00:28:50.679406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.072 [2024-07-16 00:28:50.679431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.072 [2024-07-16 00:28:50.679446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.072 [2024-07-16 00:28:50.679458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.072 [2024-07-16 00:28:50.679487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.072 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.689398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.689573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.689608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.689622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.331 [2024-07-16 00:28:50.689634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.331 [2024-07-16 00:28:50.689663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.331 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.699373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.699525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.699550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.699564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.331 [2024-07-16 00:28:50.699575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.331 [2024-07-16 00:28:50.699604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.331 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.709347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.709497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.709522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.709535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.331 [2024-07-16 00:28:50.709547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.331 [2024-07-16 00:28:50.709576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.331 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.719501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.719650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.719689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.719703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.331 [2024-07-16 00:28:50.719714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.331 [2024-07-16 00:28:50.719758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.331 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.729410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.729570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.729595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.729609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.331 [2024-07-16 00:28:50.729620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.331 [2024-07-16 00:28:50.729649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.331 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.739536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.739683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.739708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.739722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.331 [2024-07-16 00:28:50.739734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.331 [2024-07-16 00:28:50.739763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.331 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.749478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.749629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.749659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.749674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.331 [2024-07-16 00:28:50.749687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.331 [2024-07-16 00:28:50.749732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.331 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.759483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.759634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.759659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.759672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.331 [2024-07-16 00:28:50.759685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.331 [2024-07-16 00:28:50.759714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.331 qpair failed and we were unable to recover it.
00:24:59.331 [2024-07-16 00:28:50.769563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.331 [2024-07-16 00:28:50.769728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.331 [2024-07-16 00:28:50.769753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.331 [2024-07-16 00:28:50.769768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.769779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.769808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.779560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.779739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.779766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.779780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.779796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.779841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.789584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.789757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.789783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.789797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.789809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.789845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.799609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.799778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.799805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.799819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.799847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.799883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.809675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.809868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.809904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.809922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.809934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.809964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.819674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.819874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.819908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.819923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.819935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.819965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.829672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.829820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.829846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.829861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.829873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.829911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.839822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.839982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.840013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.840028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.840041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.840070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.849751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.849917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.849943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.849957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.849968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.849998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.859754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.859917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.859943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.859957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.859969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.859997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.869801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.869951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.869977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.869992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.870004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.870032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.879823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.879973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.879998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.880011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.880028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.880058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.889912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.890072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.890098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.890112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.890123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.890152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.899888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.900070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.900095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.900109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.900121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.900149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.332 qpair failed and we were unable to recover it.
00:24:59.332 [2024-07-16 00:28:50.909905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.332 [2024-07-16 00:28:50.910057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.332 [2024-07-16 00:28:50.910083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.332 [2024-07-16 00:28:50.910097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.332 [2024-07-16 00:28:50.910108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.332 [2024-07-16 00:28:50.910137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.333 qpair failed and we were unable to recover it.
00:24:59.333 [2024-07-16 00:28:50.919930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.333 [2024-07-16 00:28:50.920086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.333 [2024-07-16 00:28:50.920112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.333 [2024-07-16 00:28:50.920125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.333 [2024-07-16 00:28:50.920137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.333 [2024-07-16 00:28:50.920166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.333 qpair failed and we were unable to recover it.
00:24:59.333 [2024-07-16 00:28:50.929992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.333 [2024-07-16 00:28:50.930166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.333 [2024-07-16 00:28:50.930191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.333 [2024-07-16 00:28:50.930219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.333 [2024-07-16 00:28:50.930231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.333 [2024-07-16 00:28:50.930259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.333 qpair failed and we were unable to recover it.
00:24:59.333 [2024-07-16 00:28:50.940000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.333 [2024-07-16 00:28:50.940173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.333 [2024-07-16 00:28:50.940198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.333 [2024-07-16 00:28:50.940226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.333 [2024-07-16 00:28:50.940238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.333 [2024-07-16 00:28:50.940266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.333 qpair failed and we were unable to recover it.
00:24:59.593 [2024-07-16 00:28:50.950074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.593 [2024-07-16 00:28:50.950265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.593 [2024-07-16 00:28:50.950304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.593 [2024-07-16 00:28:50.950318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.593 [2024-07-16 00:28:50.950330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.593 [2024-07-16 00:28:50.950358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:50.960142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:50.960308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:50.960333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:50.960347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:50.960358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:50.960401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:50.970085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:50.970246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:50.970271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:50.970285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:50.970303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:50.970332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:50.980109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:50.980266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:50.980291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:50.980304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:50.980316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:50.980360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:50.990168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:50.990324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:50.990349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:50.990363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:50.990375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:50.990404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.000158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.000314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.000338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.000353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.000365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.000393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.010238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.010408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.010433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.010447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.010459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.010502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.020264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.020428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.020454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.020483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.020495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.020545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.030247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.030395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.030421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.030436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.030448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.030477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.040282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.040460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.040487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.040501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.040513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.040558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.050358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.050547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.050586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.050601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.050612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.050655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.060365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.060517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.060542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.060562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.060576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.060605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.070346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.070494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.070519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.070533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.070545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.070573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.080438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.080585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.080610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.080624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.080636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.080665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.090446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:59.594 [2024-07-16 00:28:51.090609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:59.594 [2024-07-16 00:28:51.090634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:59.594 [2024-07-16 00:28:51.090648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:59.594 [2024-07-16 00:28:51.090676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90
00:24:59.594 [2024-07-16 00:28:51.090705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:59.594 qpair failed and we were unable to recover it.
00:24:59.594 [2024-07-16 00:28:51.100461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.594 [2024-07-16 00:28:51.100622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.594 [2024-07-16 00:28:51.100650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.594 [2024-07-16 00:28:51.100664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.594 [2024-07-16 00:28:51.100690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.594 [2024-07-16 00:28:51.100719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.594 qpair failed and we were unable to recover it. 00:24:59.594 [2024-07-16 00:28:51.110474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.594 [2024-07-16 00:28:51.110628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.594 [2024-07-16 00:28:51.110654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.110668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.110680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.110709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 00:24:59.595 [2024-07-16 00:28:51.120617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.120779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.120804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.120817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.120829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.120871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 
00:24:59.595 [2024-07-16 00:28:51.130554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.130712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.130737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.130751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.130763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.130791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 00:24:59.595 [2024-07-16 00:28:51.140532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.140681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.140704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.140718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.140730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.140758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 00:24:59.595 [2024-07-16 00:28:51.150602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.150753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.150786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.150802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.150813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.150846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 
00:24:59.595 [2024-07-16 00:28:51.160600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.160746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.160772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.160785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.160797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.160826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 00:24:59.595 [2024-07-16 00:28:51.170644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.170810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.170836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.170853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.170865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.170900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 00:24:59.595 [2024-07-16 00:28:51.180656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.180813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.180839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.180857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.180869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.180910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 
00:24:59.595 [2024-07-16 00:28:51.190773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.190931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.190957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.190971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.190983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.191019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 00:24:59.595 [2024-07-16 00:28:51.200717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.595 [2024-07-16 00:28:51.200865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.595 [2024-07-16 00:28:51.200898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.595 [2024-07-16 00:28:51.200913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.595 [2024-07-16 00:28:51.200925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.595 [2024-07-16 00:28:51.200955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.595 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.210780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.210979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.211005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.211019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.211031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.211060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 
00:24:59.856 [2024-07-16 00:28:51.220766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.220940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.220965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.220979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.220991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.221019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.230930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.231096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.231121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.231135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.231147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.231176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.240814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.240971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.241002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.241017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.241028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.241057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 
00:24:59.856 [2024-07-16 00:28:51.250862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.251030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.251055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.251068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.251080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.251109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.260859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.261020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.261046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.261060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.261072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.261101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.270918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.271110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.271136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.271150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.271162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.271206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 
00:24:59.856 [2024-07-16 00:28:51.280941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.281093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.281117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.281131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.281142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.281177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.290963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.291118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.291144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.291158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.291169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.291198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.300993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.301145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.301170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.301184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.301196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.301224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 
00:24:59.856 [2024-07-16 00:28:51.311021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.311170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.311195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.311209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.311220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.311263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.321049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.321202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.321227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.321241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.321253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.321282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 00:24:59.856 [2024-07-16 00:28:51.331125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.331294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.331319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.331348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.331360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.331402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.856 qpair failed and we were unable to recover it. 
00:24:59.856 [2024-07-16 00:28:51.341122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.856 [2024-07-16 00:28:51.341314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.856 [2024-07-16 00:28:51.341339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.856 [2024-07-16 00:28:51.341353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.856 [2024-07-16 00:28:51.341365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.856 [2024-07-16 00:28:51.341393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:24:59.857 [2024-07-16 00:28:51.351115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.351268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.351292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.351306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.351319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.351347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:24:59.857 [2024-07-16 00:28:51.361265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.361427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.361452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.361466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.361477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.361508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 
00:24:59.857 [2024-07-16 00:28:51.371190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.371390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.371415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.371429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.371446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.371475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:24:59.857 [2024-07-16 00:28:51.381203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.381355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.381380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.381394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.381406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.381435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:24:59.857 [2024-07-16 00:28:51.391328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.391507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.391534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.391563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.391574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.391621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 
00:24:59.857 [2024-07-16 00:28:51.401266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.401411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.401437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.401452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.401464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.401505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:24:59.857 [2024-07-16 00:28:51.411300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.411463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.411488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.411502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.411514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.411543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:24:59.857 [2024-07-16 00:28:51.421341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.421528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.421553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.421567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.421579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.421608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 
00:24:59.857 [2024-07-16 00:28:51.431380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.431567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.431592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.431620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.431632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.431660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:24:59.857 [2024-07-16 00:28:51.441378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.441522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.441546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.441559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.441571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.441601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:24:59.857 [2024-07-16 00:28:51.451448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.451651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.451676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.451690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.451702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.451730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 
00:24:59.857 [2024-07-16 00:28:51.461442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:59.857 [2024-07-16 00:28:51.461597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:59.857 [2024-07-16 00:28:51.461621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:59.857 [2024-07-16 00:28:51.461640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:59.857 [2024-07-16 00:28:51.461654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:24:59.857 [2024-07-16 00:28:51.461698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:59.857 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-16 00:28:51.471452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.471641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.471666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.471679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.471691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.471720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-16 00:28:51.481523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.481675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.481700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.481714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.481725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.481754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-07-16 00:28:51.491667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.491867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.491915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.491930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.491942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.491972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-16 00:28:51.501561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.501717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.501742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.501756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.501767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.501810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-16 00:28:51.511621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.511777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.511802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.511816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.511828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.511857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-07-16 00:28:51.521609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.521764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.521789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.521803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.521815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.521844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-16 00:28:51.531680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.531848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.531873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.531896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.531908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.531937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-16 00:28:51.541723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.541871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.541903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.541917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.541929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.541958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-07-16 00:28:51.551741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.551895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.551925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.551940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.551952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.551981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-16 00:28:51.561735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.561897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.561923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.561936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.561948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.561977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-16 00:28:51.571797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.571969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.571994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.572008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.572020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.572049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-07-16 00:28:51.581830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.117 [2024-07-16 00:28:51.582033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.117 [2024-07-16 00:28:51.582057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.117 [2024-07-16 00:28:51.582071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.117 [2024-07-16 00:28:51.582083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.117 [2024-07-16 00:28:51.582111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.591820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.591972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.591997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.592011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.592023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.592085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.601842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.601994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.602019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.602033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.602045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.602075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 
00:25:00.118 [2024-07-16 00:28:51.611942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.612133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.612158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.612172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.612184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.612212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.621919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.622070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.622095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.622109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.622121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.622150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.631976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.632152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.632177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.632190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.632203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.632231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 
00:25:00.118 [2024-07-16 00:28:51.641989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.642137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.642167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.642183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.642195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.642238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.652016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.652176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.652200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.652214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.652226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.652254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.662022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.662171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.662195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.662209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.662221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.662250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 
00:25:00.118 [2024-07-16 00:28:51.672044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.672192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.672217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.672231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.672243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.672272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.682082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.682255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.682280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.682293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.682305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.682355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.692140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.692331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.692356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.692371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.692383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.692411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 
00:25:00.118 [2024-07-16 00:28:51.702134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.702285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.702309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.702323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.702335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.702364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.712198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.712376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.712402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.712417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.712450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.712480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-16 00:28:51.722181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.118 [2024-07-16 00:28:51.722322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.118 [2024-07-16 00:28:51.722347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.118 [2024-07-16 00:28:51.722361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.118 [2024-07-16 00:28:51.722373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.118 [2024-07-16 00:28:51.722402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.118 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-16 00:28:51.732241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.732404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.732435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.732465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.732478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.732506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-16 00:28:51.742244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.742393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.742418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.742432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.742444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.742485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-16 00:28:51.752322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.752469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.752494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.752509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.752521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.752550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-16 00:28:51.762311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.762462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.762487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.762501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.762513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.762542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-16 00:28:51.772337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.772539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.772564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.772578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.772595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.772625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-16 00:28:51.782361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.782515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.782540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.782554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.782566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.782594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-16 00:28:51.792419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.792592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.792617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.792647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.792658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.792704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-16 00:28:51.802413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.802558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.802584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.802598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.802610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.802639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-16 00:28:51.812494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.812727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.812752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.812766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.812778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.812820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-16 00:28:51.822585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.822737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.822762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.822777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.822789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.822817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-16 00:28:51.832529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.378 [2024-07-16 00:28:51.832680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.378 [2024-07-16 00:28:51.832704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.378 [2024-07-16 00:28:51.832718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.378 [2024-07-16 00:28:51.832730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.378 [2024-07-16 00:28:51.832759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-16 00:28:51.842528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.842681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.842706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.842719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.842731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.842760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-16 00:28:51.852666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.852825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.852850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.852868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.852892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.852923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.862584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.862733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.862758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.862778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.862791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.862821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.872632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.872782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.872807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.872821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.872833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.872862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-16 00:28:51.882655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.882798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.882822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.882836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.882848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.882883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.892707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.892896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.892922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.892936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.892948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.892977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.902699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.902850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.902874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.902898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.902910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.902938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-16 00:28:51.912762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.912927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.912952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.912966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.912978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.913007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.922757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.922925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.922950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.922965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.922976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.923006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.932851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.933025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.933050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.933064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.933076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.933106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-16 00:28:51.942821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.942969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.942995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.943009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.943020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.943049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.952939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.953146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.953193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.953216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.953229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.953274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.962930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.963076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.963102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.963117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.963129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.963158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-16 00:28:51.972940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.973102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.973128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.973142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.379 [2024-07-16 00:28:51.973154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.379 [2024-07-16 00:28:51.973183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-16 00:28:51.982948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.379 [2024-07-16 00:28:51.983097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.379 [2024-07-16 00:28:51.983122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.379 [2024-07-16 00:28:51.983136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.380 [2024-07-16 00:28:51.983148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.380 [2024-07-16 00:28:51.983176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.380 qpair failed and we were unable to recover it. 00:25:00.638 [2024-07-16 00:28:51.992984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.638 [2024-07-16 00:28:51.993136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.638 [2024-07-16 00:28:51.993161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.638 [2024-07-16 00:28:51.993175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.638 [2024-07-16 00:28:51.993187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.638 [2024-07-16 00:28:51.993215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.638 qpair failed and we were unable to recover it. 
00:25:00.638 [2024-07-16 00:28:52.003020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.638 [2024-07-16 00:28:52.003171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.638 [2024-07-16 00:28:52.003197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.638 [2024-07-16 00:28:52.003211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.638 [2024-07-16 00:28:52.003223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.638 [2024-07-16 00:28:52.003252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.638 qpair failed and we were unable to recover it. 00:25:00.638 [2024-07-16 00:28:52.013060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.638 [2024-07-16 00:28:52.013272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.638 [2024-07-16 00:28:52.013296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.638 [2024-07-16 00:28:52.013311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.638 [2024-07-16 00:28:52.013324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.638 [2024-07-16 00:28:52.013353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.638 qpair failed and we were unable to recover it. 00:25:00.638 [2024-07-16 00:28:52.023121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.638 [2024-07-16 00:28:52.023343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.638 [2024-07-16 00:28:52.023367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.638 [2024-07-16 00:28:52.023381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.023393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.023448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 
00:25:00.639 [2024-07-16 00:28:52.033117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.033275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.033301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.033315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.033327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.033371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.043126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.043317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.043357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.043372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.043384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.043413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.053191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.053356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.053382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.053396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.053422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.053451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 
00:25:00.639 [2024-07-16 00:28:52.063169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.063338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.063364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.063378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.063390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.063418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.073232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.073413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.073439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.073471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.073484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.073528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.083240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.083387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.083412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.083426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.083439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.083487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 
00:25:00.639 [2024-07-16 00:28:52.093336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.093497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.093522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.093536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.093548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.093592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.103283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.103437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.103462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.103476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.103488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.103517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.113293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.113441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.113466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.113480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.113492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.113521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 
00:25:00.639 [2024-07-16 00:28:52.123352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.123498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.123524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.123538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.123550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.123595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.133399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.133559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.133588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.133603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.133630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.133659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.143378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.143528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.143552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.143565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.143577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.143605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 
00:25:00.639 [2024-07-16 00:28:52.153417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.153568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.153593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.153607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.153619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.153648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.163441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.639 [2024-07-16 00:28:52.163593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.639 [2024-07-16 00:28:52.163619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.639 [2024-07-16 00:28:52.163633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.639 [2024-07-16 00:28:52.163645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.639 [2024-07-16 00:28:52.163673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-16 00:28:52.173508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.640 [2024-07-16 00:28:52.173674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.640 [2024-07-16 00:28:52.173699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.640 [2024-07-16 00:28:52.173713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.640 [2024-07-16 00:28:52.173730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.640 [2024-07-16 00:28:52.173774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.640 qpair failed and we were unable to recover it. 
00:25:00.640 [2024-07-16 00:28:52.183502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.640 [2024-07-16 00:28:52.183656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.640 [2024-07-16 00:28:52.183681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.640 [2024-07-16 00:28:52.183695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.640 [2024-07-16 00:28:52.183707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.640 [2024-07-16 00:28:52.183736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.640 qpair failed and we were unable to recover it. 00:25:00.640 [2024-07-16 00:28:52.193534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.640 [2024-07-16 00:28:52.193685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.640 [2024-07-16 00:28:52.193710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.640 [2024-07-16 00:28:52.193724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.640 [2024-07-16 00:28:52.193735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.640 [2024-07-16 00:28:52.193764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.640 qpair failed and we were unable to recover it. 00:25:00.640 [2024-07-16 00:28:52.203562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.640 [2024-07-16 00:28:52.203714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.640 [2024-07-16 00:28:52.203739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.640 [2024-07-16 00:28:52.203752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.640 [2024-07-16 00:28:52.203764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.640 [2024-07-16 00:28:52.203793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.640 qpair failed and we were unable to recover it. 
00:25:00.640 [2024-07-16 00:28:52.213603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.640 [2024-07-16 00:28:52.213762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.640 [2024-07-16 00:28:52.213786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.640 [2024-07-16 00:28:52.213800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.640 [2024-07-16 00:28:52.213812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.640 [2024-07-16 00:28:52.213841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.640 qpair failed and we were unable to recover it. 00:25:00.640 [2024-07-16 00:28:52.223621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.640 [2024-07-16 00:28:52.223771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.640 [2024-07-16 00:28:52.223796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.640 [2024-07-16 00:28:52.223809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.640 [2024-07-16 00:28:52.223821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.640 [2024-07-16 00:28:52.223849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.640 qpair failed and we were unable to recover it. 00:25:00.640 [2024-07-16 00:28:52.233652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.640 [2024-07-16 00:28:52.233826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.640 [2024-07-16 00:28:52.233851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.640 [2024-07-16 00:28:52.233865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.640 [2024-07-16 00:28:52.233884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.640 [2024-07-16 00:28:52.233915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.640 qpair failed and we were unable to recover it. 
00:25:00.640 [2024-07-16 00:28:52.243665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.640 [2024-07-16 00:28:52.243816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.640 [2024-07-16 00:28:52.243841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.640 [2024-07-16 00:28:52.243855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.640 [2024-07-16 00:28:52.243867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.640 [2024-07-16 00:28:52.243904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.640 qpair failed and we were unable to recover it. 00:25:00.900 [2024-07-16 00:28:52.253720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.900 [2024-07-16 00:28:52.253886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.900 [2024-07-16 00:28:52.253912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.900 [2024-07-16 00:28:52.253926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.900 [2024-07-16 00:28:52.253938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.900 [2024-07-16 00:28:52.253967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.900 qpair failed and we were unable to recover it. 00:25:00.900 [2024-07-16 00:28:52.263742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.900 [2024-07-16 00:28:52.263905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.900 [2024-07-16 00:28:52.263930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.900 [2024-07-16 00:28:52.263950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.900 [2024-07-16 00:28:52.263966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.900 [2024-07-16 00:28:52.263995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.900 qpair failed and we were unable to recover it. 
00:25:00.900 [2024-07-16 00:28:52.273761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.900 [2024-07-16 00:28:52.273923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.900 [2024-07-16 00:28:52.273948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.900 [2024-07-16 00:28:52.273963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.900 [2024-07-16 00:28:52.273974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.900 [2024-07-16 00:28:52.274003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.900 qpair failed and we were unable to recover it. 00:25:00.900 [2024-07-16 00:28:52.283781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.900 [2024-07-16 00:28:52.283927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.900 [2024-07-16 00:28:52.283952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.900 [2024-07-16 00:28:52.283967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.900 [2024-07-16 00:28:52.283978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.900 [2024-07-16 00:28:52.284008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.900 qpair failed and we were unable to recover it. 00:25:00.900 [2024-07-16 00:28:52.293836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.900 [2024-07-16 00:28:52.294012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.900 [2024-07-16 00:28:52.294037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.900 [2024-07-16 00:28:52.294051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.900 [2024-07-16 00:28:52.294063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.900 [2024-07-16 00:28:52.294092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.900 qpair failed and we were unable to recover it. 
00:25:00.900 [2024-07-16 00:28:52.303888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.900 [2024-07-16 00:28:52.304070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.900 [2024-07-16 00:28:52.304096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.900 [2024-07-16 00:28:52.304115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.900 [2024-07-16 00:28:52.304128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.900 [2024-07-16 00:28:52.304173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.900 qpair failed and we were unable to recover it. 00:25:00.900 [2024-07-16 00:28:52.313864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.900 [2024-07-16 00:28:52.314047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.900 [2024-07-16 00:28:52.314073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.900 [2024-07-16 00:28:52.314087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.900 [2024-07-16 00:28:52.314099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.900 [2024-07-16 00:28:52.314128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.900 qpair failed and we were unable to recover it. 00:25:00.900 [2024-07-16 00:28:52.323983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.900 [2024-07-16 00:28:52.324127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.900 [2024-07-16 00:28:52.324151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.900 [2024-07-16 00:28:52.324166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.900 [2024-07-16 00:28:52.324177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.900 [2024-07-16 00:28:52.324220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.900 qpair failed and we were unable to recover it. 
00:25:00.900 [2024-07-16 00:28:52.334011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.334172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.334201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.334216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.334228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.334272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.343942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.344093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.344118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.344132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.344144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.344172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.353981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.354128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.354153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.354173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.354186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.354216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 
00:25:00.901 [2024-07-16 00:28:52.363998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.364143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.364168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.364181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.364193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.364222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.374173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.374391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.374416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.374430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.374444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.374489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.384141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.384320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.384345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.384359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.384371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.384414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 
00:25:00.901 [2024-07-16 00:28:52.394089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.394238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.394262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.394276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.394288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.394317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.404143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.404305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.404330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.404343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.404355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.404399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.414302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.414486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.414511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.414525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.414536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.414565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 
00:25:00.901 [2024-07-16 00:28:52.424195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.424378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.424404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.424437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.424450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.424494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.434215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.434363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.434389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.434404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.434415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.434444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.444316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.444479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.444509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.444524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.444536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.444582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 
00:25:00.901 [2024-07-16 00:28:52.454277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.454481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.454506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.454520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.454532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.454561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.901 [2024-07-16 00:28:52.464293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.901 [2024-07-16 00:28:52.464443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.901 [2024-07-16 00:28:52.464468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.901 [2024-07-16 00:28:52.464482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.901 [2024-07-16 00:28:52.464494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.901 [2024-07-16 00:28:52.464523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.901 qpair failed and we were unable to recover it. 00:25:00.902 [2024-07-16 00:28:52.474344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.902 [2024-07-16 00:28:52.474507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.902 [2024-07-16 00:28:52.474532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.902 [2024-07-16 00:28:52.474546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.902 [2024-07-16 00:28:52.474558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.902 [2024-07-16 00:28:52.474602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.902 qpair failed and we were unable to recover it. 
00:25:00.902 [2024-07-16 00:28:52.484433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.902 [2024-07-16 00:28:52.484635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.902 [2024-07-16 00:28:52.484661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.902 [2024-07-16 00:28:52.484690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.902 [2024-07-16 00:28:52.484702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.902 [2024-07-16 00:28:52.484736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.902 qpair failed and we were unable to recover it. 00:25:00.902 [2024-07-16 00:28:52.494395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.902 [2024-07-16 00:28:52.494553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.902 [2024-07-16 00:28:52.494579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.902 [2024-07-16 00:28:52.494593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.902 [2024-07-16 00:28:52.494605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.902 [2024-07-16 00:28:52.494633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.902 qpair failed and we were unable to recover it. 00:25:00.902 [2024-07-16 00:28:52.504421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.902 [2024-07-16 00:28:52.504580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.902 [2024-07-16 00:28:52.504605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.902 [2024-07-16 00:28:52.504619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.902 [2024-07-16 00:28:52.504631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:00.902 [2024-07-16 00:28:52.504660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:00.902 qpair failed and we were unable to recover it. 
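The two negative return codes threaded through each record are plain errno values: the CONNECT poll fails with rc -5 (-EIO), and spdk_nvme_qpair_process_completions then reports the qpair as a CQ transport error -6 (-ENXIO), whose Linux strerror text is exactly the "No such device or address" printed above. A quick check (the strerror wording is glibc/Linux-specific):

import errno, os

for rc in (5, 6):
    # errno.errorcode maps the number to its symbolic name; os.strerror
    # gives the same text the log prints alongside -6.
    print("-%d = -%s (%s)" % (rc, errno.errorcode[rc], os.strerror(rc)))
# -5 = -EIO (Input/output error)
# -6 = -ENXIO (No such device or address)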
00:25:01.162 [2024-07-16 00:28:52.514423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.162 [2024-07-16 00:28:52.514570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.162 [2024-07-16 00:28:52.514596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.162 [2024-07-16 00:28:52.514610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.162 [2024-07-16 00:28:52.514622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.162 [2024-07-16 00:28:52.514651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.162 qpair failed and we were unable to recover it. 00:25:01.162 [2024-07-16 00:28:52.524587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.162 [2024-07-16 00:28:52.524734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.162 [2024-07-16 00:28:52.524759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.162 [2024-07-16 00:28:52.524773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.162 [2024-07-16 00:28:52.524785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.162 [2024-07-16 00:28:52.524814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.162 qpair failed and we were unable to recover it. 00:25:01.162 [2024-07-16 00:28:52.534531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.162 [2024-07-16 00:28:52.534685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.162 [2024-07-16 00:28:52.534717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.162 [2024-07-16 00:28:52.534732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.162 [2024-07-16 00:28:52.534759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.162 [2024-07-16 00:28:52.534788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.162 qpair failed and we were unable to recover it. 
00:25:01.162 [2024-07-16 00:28:52.544548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.162 [2024-07-16 00:28:52.544698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.162 [2024-07-16 00:28:52.544723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.162 [2024-07-16 00:28:52.544737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.162 [2024-07-16 00:28:52.544749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.162 [2024-07-16 00:28:52.544793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.554581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.554741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.554766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.554780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.554793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.554821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.564594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.564737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.564761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.564775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.564787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.564816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 
00:25:01.163 [2024-07-16 00:28:52.574628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.574785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.574810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.574823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.574840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.574870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.584654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.584803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.584828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.584842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.584854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.584888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.594693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.594836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.594861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.594875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.594895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.594928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 
00:25:01.163 [2024-07-16 00:28:52.604769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.604966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.604991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.605005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.605017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.605046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.614757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.614922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.614946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.614960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.614972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.615001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.624790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.624961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.624985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.624999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.625012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.625040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 
00:25:01.163 [2024-07-16 00:28:52.634819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.634978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.635004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.635018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.635033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.635063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.644861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.645018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.645043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.645057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.645069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.645098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.654894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.655058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.655087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.655103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.655115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.655145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 
00:25:01.163 [2024-07-16 00:28:52.665028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.665184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.665209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.665224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.665245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.665275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.674951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.675126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.675151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.675165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.675177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.675205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 00:25:01.163 [2024-07-16 00:28:52.685037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.685233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.163 [2024-07-16 00:28:52.685258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.163 [2024-07-16 00:28:52.685276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.163 [2024-07-16 00:28:52.685289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.163 [2024-07-16 00:28:52.685333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.163 qpair failed and we were unable to recover it. 
00:25:01.163 [2024-07-16 00:28:52.695016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.163 [2024-07-16 00:28:52.695186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.164 [2024-07-16 00:28:52.695211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.164 [2024-07-16 00:28:52.695225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.164 [2024-07-16 00:28:52.695237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.164 [2024-07-16 00:28:52.695266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.164 qpair failed and we were unable to recover it. 00:25:01.164 [2024-07-16 00:28:52.705066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.164 [2024-07-16 00:28:52.705222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.164 [2024-07-16 00:28:52.705246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.164 [2024-07-16 00:28:52.705260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.164 [2024-07-16 00:28:52.705272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.164 [2024-07-16 00:28:52.705316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.164 qpair failed and we were unable to recover it. 00:25:01.164 [2024-07-16 00:28:52.715053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.164 [2024-07-16 00:28:52.715229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.164 [2024-07-16 00:28:52.715254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.164 [2024-07-16 00:28:52.715268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.164 [2024-07-16 00:28:52.715280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.164 [2024-07-16 00:28:52.715309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.164 qpair failed and we were unable to recover it. 
00:25:01.164 [2024-07-16 00:28:52.725058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.164 [2024-07-16 00:28:52.725207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.164 [2024-07-16 00:28:52.725233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.164 [2024-07-16 00:28:52.725247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.164 [2024-07-16 00:28:52.725259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.164 [2024-07-16 00:28:52.725288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.164 qpair failed and we were unable to recover it. 00:25:01.164 [2024-07-16 00:28:52.735124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.164 [2024-07-16 00:28:52.735278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.164 [2024-07-16 00:28:52.735303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.164 [2024-07-16 00:28:52.735316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.164 [2024-07-16 00:28:52.735328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.164 [2024-07-16 00:28:52.735371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.164 qpair failed and we were unable to recover it. 00:25:01.164 [2024-07-16 00:28:52.745188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.164 [2024-07-16 00:28:52.745377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.164 [2024-07-16 00:28:52.745417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.164 [2024-07-16 00:28:52.745431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.164 [2024-07-16 00:28:52.745443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.164 [2024-07-16 00:28:52.745498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.164 qpair failed and we were unable to recover it. 
00:25:01.164 [2024-07-16 00:28:52.755154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.164 [2024-07-16 00:28:52.755301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.164 [2024-07-16 00:28:52.755326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.164 [2024-07-16 00:28:52.755345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.164 [2024-07-16 00:28:52.755358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.164 [2024-07-16 00:28:52.755387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.164 qpair failed and we were unable to recover it. 00:25:01.164 [2024-07-16 00:28:52.765178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.164 [2024-07-16 00:28:52.765326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.164 [2024-07-16 00:28:52.765351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.164 [2024-07-16 00:28:52.765365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.164 [2024-07-16 00:28:52.765377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.164 [2024-07-16 00:28:52.765405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.164 qpair failed and we were unable to recover it. 00:25:01.425 [2024-07-16 00:28:52.775254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.425 [2024-07-16 00:28:52.775416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.425 [2024-07-16 00:28:52.775441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.425 [2024-07-16 00:28:52.775455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.425 [2024-07-16 00:28:52.775483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.425 [2024-07-16 00:28:52.775511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.425 qpair failed and we were unable to recover it. 
00:25:01.425 [2024-07-16 00:28:52.785225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.425 [2024-07-16 00:28:52.785375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.425 [2024-07-16 00:28:52.785399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.425 [2024-07-16 00:28:52.785413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.425 [2024-07-16 00:28:52.785425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.425 [2024-07-16 00:28:52.785454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.425 qpair failed and we were unable to recover it. 00:25:01.425 [2024-07-16 00:28:52.795288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.425 [2024-07-16 00:28:52.795432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.425 [2024-07-16 00:28:52.795456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.425 [2024-07-16 00:28:52.795470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.425 [2024-07-16 00:28:52.795482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.425 [2024-07-16 00:28:52.795522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.425 qpair failed and we were unable to recover it. 00:25:01.425 [2024-07-16 00:28:52.805281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.425 [2024-07-16 00:28:52.805434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.425 [2024-07-16 00:28:52.805459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.425 [2024-07-16 00:28:52.805473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.425 [2024-07-16 00:28:52.805485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.425 [2024-07-16 00:28:52.805513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.425 qpair failed and we were unable to recover it. 
00:25:01.425 [2024-07-16 00:28:52.815394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.425 [2024-07-16 00:28:52.815551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.425 [2024-07-16 00:28:52.815576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.425 [2024-07-16 00:28:52.815590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.425 [2024-07-16 00:28:52.815602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.425 [2024-07-16 00:28:52.815645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.425 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.825367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.825510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.825535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.825549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.825561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.825590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.835368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.835531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.835556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.835570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.835582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.835611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 
00:25:01.426 [2024-07-16 00:28:52.845397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.845570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.845600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.845615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.845627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.845655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.855480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.855659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.855700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.855714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.855726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.855769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.865529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.865677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.865702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.865716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.865728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.865757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 
00:25:01.426 [2024-07-16 00:28:52.875512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.875666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.875692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.875706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.875721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.875766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.885534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.885681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.885706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.885721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.885733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.885769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.895549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.895710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.895735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.895748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.895775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.895805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 
00:25:01.426 [2024-07-16 00:28:52.905556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.905705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.905730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.905744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.905756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.905785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.915601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.915762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.915788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.915802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.915817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.915863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.925635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.925829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.925855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.925869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.925890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.925921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 
00:25:01.426 [2024-07-16 00:28:52.935673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.935865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.935903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.935919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.935931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.935961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.945668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.945812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.945836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.945850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.945862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.945899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 00:25:01.426 [2024-07-16 00:28:52.955710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.955859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.955892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.426 [2024-07-16 00:28:52.955907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.426 [2024-07-16 00:28:52.955919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.426 [2024-07-16 00:28:52.955948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.426 qpair failed and we were unable to recover it. 
00:25:01.426 [2024-07-16 00:28:52.965747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.426 [2024-07-16 00:28:52.965898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.426 [2024-07-16 00:28:52.965924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.427 [2024-07-16 00:28:52.965938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.427 [2024-07-16 00:28:52.965949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.427 [2024-07-16 00:28:52.965978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.427 qpair failed and we were unable to recover it. 00:25:01.427 [2024-07-16 00:28:52.975856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.427 [2024-07-16 00:28:52.976061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.427 [2024-07-16 00:28:52.976086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.427 [2024-07-16 00:28:52.976100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.427 [2024-07-16 00:28:52.976112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.427 [2024-07-16 00:28:52.976146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.427 qpair failed and we were unable to recover it. 00:25:01.427 [2024-07-16 00:28:52.985841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.427 [2024-07-16 00:28:52.986011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.427 [2024-07-16 00:28:52.986036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.427 [2024-07-16 00:28:52.986050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.427 [2024-07-16 00:28:52.986065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.427 [2024-07-16 00:28:52.986096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.427 qpair failed and we were unable to recover it. 
00:25:01.427 [2024-07-16 00:28:52.995810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.427 [2024-07-16 00:28:52.995968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.427 [2024-07-16 00:28:52.995993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.427 [2024-07-16 00:28:52.996007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.427 [2024-07-16 00:28:52.996019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.427 [2024-07-16 00:28:52.996048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.427 qpair failed and we were unable to recover it. 00:25:01.427 [2024-07-16 00:28:53.005900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.427 [2024-07-16 00:28:53.006107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.427 [2024-07-16 00:28:53.006133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.427 [2024-07-16 00:28:53.006147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.427 [2024-07-16 00:28:53.006159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.427 [2024-07-16 00:28:53.006203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.427 qpair failed and we were unable to recover it. 00:25:01.427 [2024-07-16 00:28:53.015902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.427 [2024-07-16 00:28:53.016092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.427 [2024-07-16 00:28:53.016117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.427 [2024-07-16 00:28:53.016131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.427 [2024-07-16 00:28:53.016143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.427 [2024-07-16 00:28:53.016172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.427 qpair failed and we were unable to recover it. 
00:25:01.427 [2024-07-16 00:28:53.025924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.427 [2024-07-16 00:28:53.026082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.427 [2024-07-16 00:28:53.026108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.427 [2024-07-16 00:28:53.026122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.427 [2024-07-16 00:28:53.026134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.427 [2024-07-16 00:28:53.026179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.427 qpair failed and we were unable to recover it. 00:25:01.427 [2024-07-16 00:28:53.035963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.427 [2024-07-16 00:28:53.036113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.427 [2024-07-16 00:28:53.036139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.427 [2024-07-16 00:28:53.036153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.427 [2024-07-16 00:28:53.036165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.427 [2024-07-16 00:28:53.036193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.427 qpair failed and we were unable to recover it. 00:25:01.686 [2024-07-16 00:28:53.045980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.686 [2024-07-16 00:28:53.046133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.686 [2024-07-16 00:28:53.046158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.686 [2024-07-16 00:28:53.046172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.046185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.046214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 
00:25:01.687 [2024-07-16 00:28:53.056108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.056269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.056294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.056309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.056335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.056364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.066037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.066184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.066209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.066223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.066242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.066272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.076181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.076347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.076373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.076387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.076399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.076445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 
00:25:01.687 [2024-07-16 00:28:53.086111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.086310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.086336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.086351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.086362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.086403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.096173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.096365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.096390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.096404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.096416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.096445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.106212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.106394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.106419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.106433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.106444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.106474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 
00:25:01.687 [2024-07-16 00:28:53.116219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.116373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.116399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.116413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.116425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.116453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.126223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.126368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.126393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.126406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.126418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.126461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.136292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.136498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.136523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.136537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.136549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.136577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 
00:25:01.687 [2024-07-16 00:28:53.146337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.146486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.146510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.146524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.146535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.146580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.156297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.156488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.156514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.156533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.156547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.156588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.166343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.166513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.166539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.166553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.166565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.166609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 
00:25:01.687 [2024-07-16 00:28:53.176391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.176581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.176623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.176640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.176652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.176695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.687 qpair failed and we were unable to recover it. 00:25:01.687 [2024-07-16 00:28:53.186473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.687 [2024-07-16 00:28:53.186629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.687 [2024-07-16 00:28:53.186669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.687 [2024-07-16 00:28:53.186683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.687 [2024-07-16 00:28:53.186695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.687 [2024-07-16 00:28:53.186738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.688 qpair failed and we were unable to recover it. 00:25:01.688 [2024-07-16 00:28:53.196396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.688 [2024-07-16 00:28:53.196544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.688 [2024-07-16 00:28:53.196570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.688 [2024-07-16 00:28:53.196584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.688 [2024-07-16 00:28:53.196596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.688 [2024-07-16 00:28:53.196625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.688 qpair failed and we were unable to recover it. 
00:25:01.688 [2024-07-16 00:28:53.206421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.688 [2024-07-16 00:28:53.206569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.688 [2024-07-16 00:28:53.206594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.688 [2024-07-16 00:28:53.206608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.688 [2024-07-16 00:28:53.206620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.688 [2024-07-16 00:28:53.206648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.688 qpair failed and we were unable to recover it. 00:25:01.688 [2024-07-16 00:28:53.216560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.688 [2024-07-16 00:28:53.216724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.688 [2024-07-16 00:28:53.216749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.688 [2024-07-16 00:28:53.216762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.688 [2024-07-16 00:28:53.216775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.688 [2024-07-16 00:28:53.216803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.688 qpair failed and we were unable to recover it. 00:25:01.688 [2024-07-16 00:28:53.226500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.688 [2024-07-16 00:28:53.226656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.688 [2024-07-16 00:28:53.226681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.688 [2024-07-16 00:28:53.226695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.688 [2024-07-16 00:28:53.226707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.688 [2024-07-16 00:28:53.226736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.688 qpair failed and we were unable to recover it. 
00:25:01.688 [2024-07-16 00:28:53.236525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.688 [2024-07-16 00:28:53.236714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.688 [2024-07-16 00:28:53.236754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.688 [2024-07-16 00:28:53.236768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.688 [2024-07-16 00:28:53.236780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cfc000b90 00:25:01.688 [2024-07-16 00:28:53.236808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.688 qpair failed and we were unable to recover it. 00:25:01.688 [2024-07-16 00:28:53.236980] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:01.688 A controller has encountered a failure and is being reset. 00:25:01.945 Controller properly reset. 00:25:01.945 Initializing NVMe Controllers 00:25:01.945 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:01.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:01.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:01.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:01.945 Initialization complete. Launching workers. 
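[Editor's note on the block above: this is the expected tc2 pattern rather than a product defect. Each attempted I/O-qpair CONNECT carries controller ID 0x1, which the target no longer recognizes, so the Fabrics CONNECT completes with sct 1 / sc 130 (status code type 1 = command specific; 0x82 = Connect Invalid Parameters) and the host abandons the qpair. Once the Keep Alive submission fails as well, the host resets the controller and re-attaches, which is what the "Controller properly reset" and re-association lines record. The "Controller properly reset." banner matches SPDK's examples/nvme/reconnect host app; below is a minimal sketch of an invocation that produces this kind of output — the binary path follows this workspace's layout, but every flag value is an assumption, not read from this run.]

```bash
# Hypothetical sketch: SPDK reconnect example against the target above.
# Flags (queue depth, I/O size, mix, runtime, core mask) are assumptions.
APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect

$APP -c 0xF -q 32 -o 4096 -w randrw -M 50 -t 10 \
     -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```

[A core mask of 0xF would account for the four lcore associations above and the four "Starting thread on core N" lines that follow.]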
00:25:01.945 Starting thread on core 1 00:25:01.945 Starting thread on core 2 00:25:01.946 Starting thread on core 3 00:25:01.946 Starting thread on core 0 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:01.946 00:25:01.946 real 0m10.853s 00:25:01.946 user 0m17.275s 00:25:01.946 sys 0m5.741s 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.946 ************************************ 00:25:01.946 END TEST nvmf_target_disconnect_tc2 00:25:01.946 ************************************ 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.946 rmmod nvme_tcp 00:25:01.946 rmmod nvme_fabrics 00:25:01.946 rmmod nvme_keyring 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2390825 ']' 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2390825 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2390825 ']' 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2390825 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2390825 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2390825' 00:25:01.946 killing process with pid 2390825 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2390825 00:25:01.946 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2390825 00:25:02.509 
00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.509 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.510 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.510 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.510 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.510 00:28:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.510 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.510 00:28:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.413 00:28:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:04.413 00:25:04.413 real 0m15.577s 00:25:04.413 user 0m43.335s 00:25:04.413 sys 0m7.722s 00:25:04.413 00:28:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:04.413 00:28:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:04.413 ************************************ 00:25:04.413 END TEST nvmf_target_disconnect 00:25:04.413 ************************************ 00:25:04.413 00:28:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:04.413 00:28:55 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:04.413 00:28:55 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.413 00:28:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.413 00:28:55 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:04.413 00:25:04.414 real 19m36.358s 00:25:04.414 user 46m25.914s 00:25:04.414 sys 4m50.060s 00:25:04.414 00:28:55 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:04.414 00:28:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.414 ************************************ 00:25:04.414 END TEST nvmf_tcp 00:25:04.414 ************************************ 00:25:04.414 00:28:55 -- common/autotest_common.sh@1142 -- # return 0 00:25:04.414 00:28:55 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:04.414 00:28:55 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:04.414 00:28:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:04.414 00:28:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:04.414 00:28:55 -- common/autotest_common.sh@10 -- # set +x 00:25:04.414 ************************************ 00:25:04.414 START TEST spdkcli_nvmf_tcp 00:25:04.414 ************************************ 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:04.414 * Looking for test storage... 
00:25:04.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.414 00:28:55 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2392022 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2392022 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2392022 ']' 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.414 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.674 [2024-07-16 00:28:56.055809] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:25:04.674 [2024-07-16 00:28:56.055915] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392022 ] 00:25:04.674 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.674 [2024-07-16 00:28:56.113355] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:04.674 [2024-07-16 00:28:56.220392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.674 [2024-07-16 00:28:56.220397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.937 00:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:04.937 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:04.937 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:04.937 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:04.937 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:04.937 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:04.937 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:04.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:04.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:04.937 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:04.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:04.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:04.937 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:04.937 ' 00:25:07.486 [2024-07-16 00:28:58.935443] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.857 [2024-07-16 00:29:00.175799] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:11.412 [2024-07-16 00:29:02.459057] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:13.314 [2024-07-16 00:29:04.437139] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:14.689 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:14.689 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:14.689 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:14.689 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:14.689 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:14.689 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:14.689 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:14.689 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:14.689 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:14.689 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:14.689 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:14.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:14.689 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:14.689 00:29:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:14.689 00:29:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.689 00:29:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.689 00:29:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:14.689 00:29:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.689 00:29:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.689 00:29:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:14.689 00:29:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:14.947 00:29:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:14.947 00:29:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:14.947 00:29:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:14.947 00:29:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.947 00:29:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:15.205 00:29:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:15.205 00:29:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:15.205 00:29:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:15.205 00:29:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:15.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:15.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:15.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:15.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:15.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:15.205 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:15.205 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:15.205 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:15.205 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:15.205 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:15.205 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:15.205 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:15.205 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:15.205 ' 00:25:20.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:20.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:20.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:20.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:20.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:20.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:20.472 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:20.472 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:20.472 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:20.472 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:20.472 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False]
00:25:20.472 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:25:20.472 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:25:20.472 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2392022
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2392022 ']'
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2392022
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2392022
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2392022'
00:25:20.472 killing process with pid 2392022
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2392022
00:25:20.472 00:29:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2392022
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2392022 ']'
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2392022
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2392022 ']'
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2392022
00:25:20.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2392022) - No such process
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2392022 is not found'
00:25:20.731 Process with pid 2392022 is not found
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:25:20.731
00:25:20.731 real 0m16.194s
00:25:20.731 user 0m34.262s
00:25:20.731 sys 0m0.839s
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:20.731 00:29:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:20.731 ************************************
00:25:20.731 END TEST spdkcli_nvmf_tcp
00:25:20.731 ************************************
00:25:20.732 00:29:12 -- common/autotest_common.sh@1142 -- # return 0
00:25:20.732 00:29:12 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:20.732 00:29:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:20.732 00:29:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:20.732 00:29:12 -- common/autotest_common.sh@10 -- # set +x 00:25:20.732 ************************************ 00:25:20.732 START TEST nvmf_identify_passthru 00:25:20.732 ************************************ 00:25:20.732 00:29:12 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:20.732 * Looking for test storage... 00:25:20.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.732 00:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.732 00:29:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.732 00:29:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.732 00:29:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.732 00:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.732 00:29:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.732 00:29:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.732 00:29:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:20.732 00:29:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.732 00:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.732 00:29:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:20.732 00:29:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:20.732 00:29:12 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:20.732 00:29:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.635 00:29:14 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:22.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:22.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.635 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:22.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:22.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
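The device-discovery pass above reduces to a plain sysfs lookup: for each candidate PCI function, the kernel lists the bound network interfaces under /sys/bus/pci/devices/<bdf>/net/. A minimal standalone Bash sketch of that idea follows; it is not part of the SPDK scripts, and the BDF value is simply the one reported in the log above:

    #!/usr/bin/env bash
    # List the kernel net interfaces exposed under one PCI network function.
    # Mirrors the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob seen above.
    bdf=0000:0a:00.0                 # first e810 port found in this run
    for netdir in "/sys/bus/pci/devices/$bdf/net/"*; do
      [[ -e $netdir ]] || continue   # glob stays literal if no interface is bound
      echo "Found net devices under $bdf: ${netdir##*/}"
    done

Run on the node above, this would print 'Found net devices under 0000:0a:00.0: cvl_0_0', matching the corresponding line in the log output from nvmf/common.sh.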
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:22.636 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:22.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:22.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:25:22.896
00:25:22.896 --- 10.0.0.2 ping statistics ---
00:25:22.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:22.896 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:22.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:22.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms
00:25:22.896
00:25:22.896 --- 10.0.0.1 ping statistics ---
00:25:22.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:22.896 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:22.896 00:29:14 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:22.896 00:29:14 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:22.896 00:29:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=()
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs))
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=()
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:25:22.896 00:29:14 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0
00:25:22.896 00:29:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0
00:25:22.896 00:29:14 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']'
00:25:22.896 00:29:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:25:22.896 00:29:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:25:22.896 00:29:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:25:23.154 EAL: No free 2048 kB hugepages reported on node 1
00:25:27.339
00:29:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:27.339 00:29:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:27.339 00:29:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:27.339 00:29:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:27.339 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.563 00:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:31.563 00:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.563 00:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.563 00:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2396651 00:25:31.563 00:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:31.563 00:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:31.563 00:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2396651 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2396651 ']' 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.563 00:29:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.563 [2024-07-16 00:29:22.963632] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:25:31.563 [2024-07-16 00:29:22.963733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.563 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.563 [2024-07-16 00:29:23.040141] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.563 [2024-07-16 00:29:23.163677] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.563 [2024-07-16 00:29:23.163749] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:31.563 [2024-07-16 00:29:23.163774] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:31.563 [2024-07-16 00:29:23.163798] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:31.563 [2024-07-16 00:29:23.163895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:31.563 [2024-07-16 00:29:23.163957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:31.563 [2024-07-16 00:29:23.163982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:31.563 [2024-07-16 00:29:23.163985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:31.823 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:31.823 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0
00:25:31.823 00:29:23 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:25:31.823 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.823 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:31.824 INFO: Log level set to 20
00:25:31.824 INFO: Requests:
00:25:31.824 {
00:25:31.824 "jsonrpc": "2.0",
00:25:31.824 "method": "nvmf_set_config",
00:25:31.824 "id": 1,
00:25:31.824 "params": {
00:25:31.824 "admin_cmd_passthru": {
00:25:31.824 "identify_ctrlr": true
00:25:31.824 }
00:25:31.824 }
00:25:31.824 }
00:25:31.824
00:25:31.824 INFO: response:
00:25:31.824 {
00:25:31.824 "jsonrpc": "2.0",
00:25:31.824 "id": 1,
00:25:31.824 "result": true
00:25:31.824 }
00:25:31.824
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.824 00:29:23 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:31.824 INFO: Setting log level to 20
00:25:31.824 INFO: Setting log level to 20
00:25:31.824 INFO: Log level set to 20
00:25:31.824 INFO: Log level set to 20
00:25:31.824 INFO: Requests:
00:25:31.824 {
00:25:31.824 "jsonrpc": "2.0",
00:25:31.824 "method": "framework_start_init",
00:25:31.824 "id": 1
00:25:31.824 }
00:25:31.824
00:25:31.824 INFO: Requests:
00:25:31.824 {
00:25:31.824 "jsonrpc": "2.0",
00:25:31.824 "method": "framework_start_init",
00:25:31.824 "id": 1
00:25:31.824 }
00:25:31.824
00:25:31.824 [2024-07-16 00:29:23.304275] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:25:31.824 INFO: response:
00:25:31.824 {
00:25:31.824 "jsonrpc": "2.0",
00:25:31.824 "id": 1,
00:25:31.824 "result": true
00:25:31.824 }
00:25:31.824
00:25:31.824 INFO: response:
00:25:31.824 {
00:25:31.824 "jsonrpc": "2.0",
00:25:31.824 "id": 1,
00:25:31.824 "result": true
00:25:31.824 }
00:25:31.824
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.824 00:29:23 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:31.824 INFO: Setting log level to 40
00:25:31.824 INFO: Setting log level to 40
00:25:31.824 INFO: Setting log level to 40
00:25:31.824 [2024-07-16 00:29:23.314446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.824 00:29:23 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:31.824 00:29:23 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.824 00:29:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.118 Nvme0n1
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.118 [2024-07-16 00:29:26.213641] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.118 [
00:25:35.118 {
00:25:35.118 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:35.118 "subtype": "Discovery",
00:25:35.118 "listen_addresses": [],
00:25:35.118 "allow_any_host": true,
00:25:35.118 "hosts": []
00:25:35.118 },
00:25:35.118 {
00:25:35.118 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:35.118 "subtype": "NVMe",
00:25:35.118 "listen_addresses": [
00:25:35.118 {
00:25:35.118 "trtype": "TCP",
00:25:35.118 "adrfam": "IPv4",
00:25:35.118 "traddr": "10.0.0.2",
00:25:35.118 "trsvcid": "4420"
00:25:35.118 }
00:25:35.118 ],
00:25:35.118 "allow_any_host": true,
00:25:35.118 "hosts": [],
00:25:35.118 "serial_number": "SPDK00000000000001",
00:25:35.118 "model_number": "SPDK bdev Controller",
00:25:35.118 "max_namespaces": 1,
00:25:35.118 "min_cntlid": 1,
00:25:35.118 "max_cntlid": 65519,
00:25:35.118 "namespaces": [
00:25:35.118 {
00:25:35.118 "nsid": 1,
00:25:35.118 "bdev_name": "Nvme0n1",
00:25:35.118 "name": "Nvme0n1",
00:25:35.118 "nguid": "14A63CEE69644845A2DB1D8F8D3D748D",
00:25:35.118 "uuid": "14a63cee-6964-4845-a2db-1d8f8d3d748d"
00:25:35.118 }
00:25:35.118 ]
00:25:35.118 }
00:25:35.118 ]
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:25:35.118 EAL: No free 2048 kB hugepages reported on node 1
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:25:35.118 EAL: No free 2048 kB hugepages reported on node 1
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']'
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']'
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:25:35.118 00:29:26 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync
00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e
00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:35.118 rmmod nvme_tcp
00:25:35.118 rmmod nvme_fabrics
00:25:35.118 rmmod nvme_keyring
00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e
00:25:35.118 00:29:26
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2396651 ']' 00:25:35.118 00:29:26 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2396651 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2396651 ']' 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2396651 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2396651 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2396651' 00:25:35.118 killing process with pid 2396651 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2396651 00:25:35.118 00:29:26 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2396651 00:25:37.018 00:29:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.018 00:29:28 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.018 00:29:28 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.018 00:29:28 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.018 00:29:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.018 00:29:28 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.018 00:29:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:37.018 00:29:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.924 00:29:30 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.924 00:25:38.924 real 0m18.132s 00:25:38.924 user 0m26.840s 00:25:38.924 sys 0m2.316s 00:25:38.924 00:29:30 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:38.924 00:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.924 ************************************ 00:25:38.924 END TEST nvmf_identify_passthru 00:25:38.924 ************************************ 00:25:38.924 00:29:30 -- common/autotest_common.sh@1142 -- # return 0 00:25:38.924 00:29:30 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:38.924 00:29:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:38.924 00:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.924 00:29:30 -- common/autotest_common.sh@10 -- # set +x 00:25:38.924 ************************************ 00:25:38.924 START TEST nvmf_dif 00:25:38.924 ************************************ 00:25:38.924 00:29:30 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:38.924 * Looking for test storage... 
00:25:38.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:38.924 00:29:30 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:38.924 00:29:30 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.925 00:29:30 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.925 00:29:30 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.925 00:29:30 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.925 00:29:30 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.925 00:29:30 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.925 00:29:30 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.925 00:29:30 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:38.925 00:29:30 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.925 00:29:30 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:38.925 00:29:30 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:38.925 00:29:30 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:38.925 00:29:30 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:38.925 00:29:30 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.925 00:29:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:38.925 00:29:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:38.925 00:29:30 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.925 00:29:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:41.451 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:41.451 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
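The scan above walks PCI vendor/device pairs: 0x8086:0x159b is the Intel E810 variant both cvl ports sit on, and each hit is resolved to its kernel netdev through sysfs. Outside the harness the same discovery can be reproduced like this (a sketch; the lspci column layout is assumed stable):

    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      # each matched port exposes its netdev under sysfs, which is exactly
      # what pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) collects above
      ls "/sys/bus/pci/devices/$pci/net/"
    done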
00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:41.451 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:41.451 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.451 00:29:32 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.452 00:29:32 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:25:41.452 00:25:41.452 --- 10.0.0.2 ping statistics --- 00:25:41.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.452 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:25:41.452 00:25:41.452 --- 10.0.0.1 ping statistics --- 00:25:41.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.452 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:41.452 00:29:32 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:42.018 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:42.018 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:42.018 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:42.018 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:42.018 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:42.018 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:42.018 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:42.018 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:42.018 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:42.018 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:42.018 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:42.018 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:42.018 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:42.018 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:42.018 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:42.018 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:42.018 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:42.276 00:29:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:42.276 00:29:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:42.276 00:29:33 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:42.276 00:29:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2399796 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:42.276 00:29:33 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2399796 00:25:42.276 00:29:33 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2399796 ']' 00:25:42.276 00:29:33 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.276 00:29:33 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.276 00:29:33 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.276 00:29:33 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.276 00:29:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 [2024-07-16 00:29:33.854736] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:25:42.276 [2024-07-16 00:29:33.854818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.276 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.535 [2024-07-16 00:29:33.925264] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.535 [2024-07-16 00:29:34.044376] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.535 [2024-07-16 00:29:34.044439] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.535 [2024-07-16 00:29:34.044454] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.535 [2024-07-16 00:29:34.044468] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.535 [2024-07-16 00:29:34.044479] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
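Before the target came up, nvmf_tcp_init had to solve a single-host problem: both E810 ports are in the same machine, so one of them is moved into a network namespace to force real TCP traffic between initiator and target. Condensed from the commands in the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, becomes 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target then runs inside the namespace, which is why NVMF_APP above
    # is prefixed with "ip netns exec cvl_0_0_ns_spdk"
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF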
00:25:42.535 [2024-07-16 00:29:34.044511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:43.470 00:29:34 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 00:29:34 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.470 00:29:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:43.470 00:29:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 [2024-07-16 00:29:34.868407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.470 00:29:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.470 00:29:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 ************************************ 00:25:43.470 START TEST fio_dif_1_default 00:25:43.470 ************************************ 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 bdev_null0 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 [2024-07-16 00:29:34.928691] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:43.470 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:43.471 { 00:25:43.471 "params": { 00:25:43.471 "name": "Nvme$subsystem", 00:25:43.471 "trtype": "$TEST_TRANSPORT", 00:25:43.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.471 "adrfam": "ipv4", 00:25:43.471 "trsvcid": "$NVMF_PORT", 00:25:43.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.471 "hdgst": ${hdgst:-false}, 00:25:43.471 "ddgst": ${ddgst:-false} 00:25:43.471 }, 00:25:43.471 "method": "bdev_nvme_attach_controller" 00:25:43.471 } 00:25:43.471 EOF 00:25:43.471 )") 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:43.471 "params": { 00:25:43.471 "name": "Nvme0", 00:25:43.471 "trtype": "tcp", 00:25:43.471 "traddr": "10.0.0.2", 00:25:43.471 "adrfam": "ipv4", 00:25:43.471 "trsvcid": "4420", 00:25:43.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:43.471 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:43.471 "hdgst": false, 00:25:43.471 "ddgst": false 00:25:43.471 }, 00:25:43.471 "method": "bdev_nvme_attach_controller" 00:25:43.471 }' 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:43.471 00:29:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.729 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:43.729 fio-3.35 00:25:43.729 Starting 1 thread 00:25:43.729 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.939 00:25:55.939 filename0: (groupid=0, jobs=1): err= 0: pid=2400156: Tue Jul 16 00:29:45 2024 00:25:55.939 read: IOPS=187, BW=749KiB/s (767kB/s)(7504KiB/10019msec) 00:25:55.939 slat (nsec): min=4993, max=57685, avg=10142.20, stdev=2953.91 00:25:55.939 clat (usec): min=854, max=45443, avg=21330.11, stdev=20373.59 00:25:55.939 lat (usec): min=862, max=45460, avg=21340.26, stdev=20373.27 00:25:55.939 clat percentiles (usec): 00:25:55.939 | 1.00th=[ 881], 5.00th=[ 898], 10.00th=[ 906], 20.00th=[ 914], 00:25:55.939 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[41157], 60.00th=[41157], 00:25:55.939 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:55.939 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:25:55.939 | 99.99th=[45351] 00:25:55.939 bw ( KiB/s): min= 704, max= 768, per=99.87%, avg=748.80, stdev=30.09, samples=20 00:25:55.939 iops : min= 176, max= 192, 
avg=187.20, stdev= 7.52, samples=20 00:25:55.939 lat (usec) : 1000=49.25% 00:25:55.939 lat (msec) : 2=0.64%, 50=50.11% 00:25:55.939 cpu : usr=88.79%, sys=10.90%, ctx=14, majf=0, minf=173 00:25:55.939 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:55.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.939 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.939 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:55.939 00:25:55.939 Run status group 0 (all jobs): 00:25:55.939 READ: bw=749KiB/s (767kB/s), 749KiB/s-749KiB/s (767kB/s-767kB/s), io=7504KiB (7684kB), run=10019-10019msec 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.939 00:25:55.939 real 0m11.288s 00:25:55.939 user 0m10.241s 00:25:55.939 sys 0m1.366s 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 ************************************ 00:25:55.939 END TEST fio_dif_1_default 00:25:55.939 ************************************ 00:25:55.939 00:29:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:55.939 00:29:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:55.939 00:29:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:55.939 00:29:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 ************************************ 00:25:55.939 START TEST fio_dif_1_multi_subsystems 00:25:55.939 ************************************ 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 bdev_null0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 [2024-07-16 00:29:46.272646] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:55.939 bdev_null1 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:55.939 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
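Each create_subsystem call in this test is four RPCs against the running target: a DIF-capable null bdev, a subsystem, a namespace mapping, and a TCP listener. For subsystem 0 (subsystem 1 is identical with the 0s replaced by 1s); rpc_cmd in the trace is the harness wrapper, scripts/rpc.py is the standalone equivalent:

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420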
00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:55.940 { 00:25:55.940 "params": { 00:25:55.940 "name": "Nvme$subsystem", 00:25:55.940 "trtype": "$TEST_TRANSPORT", 00:25:55.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:55.940 "adrfam": "ipv4", 00:25:55.940 "trsvcid": "$NVMF_PORT", 00:25:55.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:55.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:55.940 "hdgst": ${hdgst:-false}, 00:25:55.940 "ddgst": ${ddgst:-false} 00:25:55.940 }, 00:25:55.940 "method": "bdev_nvme_attach_controller" 00:25:55.940 } 00:25:55.940 EOF 00:25:55.940 )") 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:55.940 { 00:25:55.940 "params": { 00:25:55.940 "name": "Nvme$subsystem", 00:25:55.940 "trtype": "$TEST_TRANSPORT", 00:25:55.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:55.940 "adrfam": "ipv4", 00:25:55.940 "trsvcid": "$NVMF_PORT", 00:25:55.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:55.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:55.940 "hdgst": ${hdgst:-false}, 00:25:55.940 "ddgst": ${ddgst:-false} 00:25:55.940 }, 00:25:55.940 "method": "bdev_nvme_attach_controller" 00:25:55.940 } 00:25:55.940 EOF 00:25:55.940 )") 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
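The invocation being assembled here feeds fio two generated inputs through /dev/fd: the SPDK bdev JSON config (printed just below) on fd 62 and the fio job file on fd 61, with the SPDK fio plugin preloaded. One plausible reconstruction of that plumbing, assuming bash process substitution; the actual fio_bdev wrapper in dif.sh may differ in detail:

    fio_bdev() {  # wrap fio with the SPDK bdev engine preloaded (path from the trace)
      LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio "$@"
    }
    fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
      62< <(gen_nvmf_target_json 0 1) 61< <(gen_fio_conf)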
00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:55.940 "params": { 00:25:55.940 "name": "Nvme0", 00:25:55.940 "trtype": "tcp", 00:25:55.940 "traddr": "10.0.0.2", 00:25:55.940 "adrfam": "ipv4", 00:25:55.940 "trsvcid": "4420", 00:25:55.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:55.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:55.940 "hdgst": false, 00:25:55.940 "ddgst": false 00:25:55.940 }, 00:25:55.940 "method": "bdev_nvme_attach_controller" 00:25:55.940 },{ 00:25:55.940 "params": { 00:25:55.940 "name": "Nvme1", 00:25:55.940 "trtype": "tcp", 00:25:55.940 "traddr": "10.0.0.2", 00:25:55.940 "adrfam": "ipv4", 00:25:55.940 "trsvcid": "4420", 00:25:55.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:55.940 "hdgst": false, 00:25:55.940 "ddgst": false 00:25:55.940 }, 00:25:55.940 "method": "bdev_nvme_attach_controller" 00:25:55.940 }' 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:55.940 00:29:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.940 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:55.940 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:55.940 fio-3.35 00:25:55.940 Starting 2 threads 00:25:55.940 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.932 00:26:05.932 filename0: (groupid=0, jobs=1): err= 0: pid=2401560: Tue Jul 16 00:29:57 2024 00:26:05.932 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10031msec) 00:26:05.932 slat (nsec): min=7853, max=29261, avg=9839.09, stdev=2563.04 00:26:05.932 clat (usec): min=40892, max=46779, avg=41418.83, stdev=600.05 00:26:05.932 lat (usec): min=40900, max=46808, avg=41428.67, stdev=600.22 00:26:05.932 clat percentiles (usec): 00:26:05.932 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:05.932 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:26:05.932 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:05.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:26:05.932 | 99.99th=[46924] 
00:26:05.932 bw ( KiB/s): min= 352, max= 416, per=33.85%, avg=385.60, stdev=12.61, samples=20 00:26:05.932 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:26:05.932 lat (msec) : 50=100.00% 00:26:05.932 cpu : usr=93.61%, sys=6.10%, ctx=19, majf=0, minf=135 00:26:05.932 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.932 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.932 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:05.932 filename1: (groupid=0, jobs=1): err= 0: pid=2401561: Tue Jul 16 00:29:57 2024 00:26:05.932 read: IOPS=187, BW=752KiB/s (770kB/s)(7536KiB/10027msec) 00:26:05.932 slat (usec): min=6, max=119, avg= 9.73, stdev= 3.42 00:26:05.932 clat (usec): min=807, max=46820, avg=21258.24, stdev=20303.89 00:26:05.932 lat (usec): min=815, max=46851, avg=21267.97, stdev=20303.71 00:26:05.932 clat percentiles (usec): 00:26:05.932 | 1.00th=[ 865], 5.00th=[ 881], 10.00th=[ 889], 20.00th=[ 906], 00:26:05.932 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:26:05.932 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:05.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:26:05.932 | 99.99th=[46924] 00:26:05.932 bw ( KiB/s): min= 704, max= 768, per=66.12%, avg=752.00, stdev=26.47, samples=20 00:26:05.932 iops : min= 176, max= 192, avg=188.00, stdev= 6.62, samples=20 00:26:05.932 lat (usec) : 1000=48.73% 00:26:05.932 lat (msec) : 2=1.17%, 50=50.11% 00:26:05.932 cpu : usr=93.90%, sys=5.81%, ctx=13, majf=0, minf=171 00:26:05.932 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.932 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.932 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:05.932 00:26:05.932 Run status group 0 (all jobs): 00:26:05.932 READ: bw=1137KiB/s (1165kB/s), 386KiB/s-752KiB/s (395kB/s-770kB/s), io=11.1MiB (11.7MB), run=10027-10031msec 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.190 
00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.190 00:26:06.190 real 0m11.511s 00:26:06.190 user 0m20.275s 00:26:06.190 sys 0m1.553s 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:06.190 00:29:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:06.190 ************************************ 00:26:06.190 END TEST fio_dif_1_multi_subsystems 00:26:06.190 ************************************ 00:26:06.190 00:29:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:06.190 00:29:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:06.190 00:29:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:06.190 00:29:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.190 00:29:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 ************************************ 00:26:06.448 START TEST fio_dif_rand_params 00:26:06.448 ************************************ 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 
0 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 bdev_null0 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 [2024-07-16 00:29:57.837288] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:06.448 { 00:26:06.448 "params": { 00:26:06.448 "name": "Nvme$subsystem", 00:26:06.448 "trtype": "$TEST_TRANSPORT", 00:26:06.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.448 "adrfam": "ipv4", 00:26:06.448 "trsvcid": "$NVMF_PORT", 00:26:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.448 "hdgst": ${hdgst:-false}, 00:26:06.448 "ddgst": ${ddgst:-false} 00:26:06.448 }, 00:26:06.448 "method": "bdev_nvme_attach_controller" 00:26:06.448 } 00:26:06.448 EOF 00:26:06.448 )") 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:06.448 "params": { 00:26:06.448 "name": "Nvme0", 00:26:06.448 "trtype": "tcp", 00:26:06.448 "traddr": "10.0.0.2", 00:26:06.448 "adrfam": "ipv4", 00:26:06.448 "trsvcid": "4420", 00:26:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.448 "hdgst": false, 00:26:06.448 "ddgst": false 00:26:06.448 }, 00:26:06.448 "method": "bdev_nvme_attach_controller" 00:26:06.448 }' 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:06.448 00:29:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.706 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:06.706 ... 
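
[Sketch] The trace above shows how the test wires fio to SPDK without writing any files: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, jq normalizes it into the final JSON config, and fio_bdev hands that config plus the generated job file to fio over /dev/fd/62 and /dev/fd/61 with the SPDK bdev plugin preloaded. Below is a rough file-based equivalent for readers who want to reproduce the run by hand. Only the inner attach-controller entry and the traced fio options (ioengine=spdk_bdev, bs=128k, numjobs=3, iodepth=3, runtime=5) are verbatim from the log; SPDK_DIR, the "subsystems" envelope (the real one comes from gen_nvmf_target_json in nvmf/common.sh), the job-file layout, and the Nvme0n1 filename are assumptions, so treat this as a sketch rather than the harness's exact invocation.

    #!/usr/bin/env bash
    # Rough file-based equivalent of the fio_bdev invocation traced above.
    # Assumes: SPDK_DIR points at a built SPDK tree, and the target from the
    # trace is listening on 10.0.0.2:4420.
    SPDK_DIR=/path/to/spdk   # placeholder

    # JSON config for the SPDK bdev layer; the inner entry matches the one
    # printed by gen_nvmf_target_json in the trace, the envelope is assumed.
    cat > /tmp/nvme0.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    JSON

    # Job file mirroring the traced parameters; attaching controller "Nvme0"
    # exposes its namespace as bdev "Nvme0n1" (assumed naming).
    cat > /tmp/dif.fio <<'FIO'
    [global]
    ioengine=spdk_bdev
    thread=1
    direct=1
    rw=randread
    bs=128k
    iodepth=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1
    numjobs=3
    FIO

    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
        fio --spdk_json_conf=/tmp/nvme0.json /tmp/dif.fio
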
00:26:06.706 fio-3.35 00:26:06.706 Starting 3 threads 00:26:06.706 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.280 00:26:13.280 filename0: (groupid=0, jobs=1): err= 0: pid=2402958: Tue Jul 16 00:30:03 2024 00:26:13.281 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(116MiB/5005msec) 00:26:13.281 slat (nsec): min=7437, max=82149, avg=12595.87, stdev=3724.84 00:26:13.281 clat (usec): min=5497, max=95415, avg=16229.92, stdev=14514.65 00:26:13.281 lat (usec): min=5508, max=95428, avg=16242.52, stdev=14514.56 00:26:13.281 clat percentiles (usec): 00:26:13.281 | 1.00th=[ 6194], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 9372], 00:26:13.281 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11600], 60.00th=[13042], 00:26:13.281 | 70.00th=[14091], 80.00th=[15008], 90.00th=[50070], 95.00th=[53216], 00:26:13.281 | 99.00th=[56886], 99.50th=[92799], 99.90th=[94897], 99.95th=[94897], 00:26:13.281 | 99.99th=[94897] 00:26:13.281 bw ( KiB/s): min=18432, max=30720, per=32.44%, avg=23603.20, stdev=3842.46, samples=10 00:26:13.281 iops : min= 144, max= 240, avg=184.40, stdev=30.02, samples=10 00:26:13.281 lat (msec) : 10=28.68%, 20=60.50%, 50=1.08%, 100=9.74% 00:26:13.281 cpu : usr=89.53%, sys=9.85%, ctx=16, majf=0, minf=116 00:26:13.281 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.281 issued rwts: total=924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.281 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.281 filename0: (groupid=0, jobs=1): err= 0: pid=2402959: Tue Jul 16 00:30:03 2024 00:26:13.281 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(124MiB/5030msec) 00:26:13.281 slat (nsec): min=7368, max=34724, avg=12806.93, stdev=3501.53 00:26:13.281 clat (usec): min=5988, max=91832, avg=15237.32, stdev=13091.91 00:26:13.281 lat (usec): min=6000, max=91844, avg=15250.12, stdev=13091.72 00:26:13.281 clat percentiles (usec): 00:26:13.281 | 1.00th=[ 6259], 5.00th=[ 7308], 10.00th=[ 8356], 20.00th=[ 9110], 00:26:13.281 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[11207], 60.00th=[12256], 00:26:13.281 | 70.00th=[13304], 80.00th=[14091], 90.00th=[17957], 95.00th=[51643], 00:26:13.281 | 99.00th=[54789], 99.50th=[55313], 99.90th=[91751], 99.95th=[91751], 00:26:13.281 | 99.99th=[91751] 00:26:13.281 bw ( KiB/s): min=21504, max=28672, per=34.70%, avg=25247.10, stdev=2450.24, samples=10 00:26:13.281 iops : min= 168, max= 224, avg=197.20, stdev=19.10, samples=10 00:26:13.281 lat (msec) : 10=36.60%, 20=53.49%, 50=1.92%, 100=7.99% 00:26:13.281 cpu : usr=91.11%, sys=8.39%, ctx=8, majf=0, minf=84 00:26:13.281 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.281 issued rwts: total=989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.281 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.281 filename0: (groupid=0, jobs=1): err= 0: pid=2402960: Tue Jul 16 00:30:03 2024 00:26:13.281 read: IOPS=188, BW=23.6MiB/s (24.8MB/s)(118MiB/5008msec) 00:26:13.281 slat (nsec): min=7399, max=77754, avg=12862.60, stdev=4519.61 00:26:13.281 clat (usec): min=5400, max=93621, avg=15861.26, stdev=13793.84 00:26:13.281 lat (usec): min=5411, max=93634, avg=15874.12, stdev=13793.97 00:26:13.281 clat percentiles (usec): 
00:26:13.281 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 8979], 00:26:13.281 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11469], 60.00th=[12518], 00:26:13.281 | 70.00th=[13435], 80.00th=[14222], 90.00th=[50070], 95.00th=[52691], 00:26:13.281 | 99.00th=[54789], 99.50th=[55313], 99.90th=[93848], 99.95th=[93848], 00:26:13.281 | 99.99th=[93848] 00:26:13.281 bw ( KiB/s): min=19456, max=31232, per=33.18%, avg=24140.80, stdev=3602.34, samples=10 00:26:13.281 iops : min= 152, max= 244, avg=188.60, stdev=28.14, samples=10 00:26:13.281 lat (msec) : 10=34.14%, 20=53.91%, 50=1.59%, 100=10.36% 00:26:13.281 cpu : usr=90.53%, sys=8.95%, ctx=22, majf=0, minf=148 00:26:13.281 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.281 issued rwts: total=946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.281 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.281 00:26:13.281 Run status group 0 (all jobs): 00:26:13.281 READ: bw=71.0MiB/s (74.5MB/s), 23.1MiB/s-24.6MiB/s (24.2MB/s-25.8MB/s), io=357MiB (375MB), run=5005-5030msec 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
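
[Sketch] Each create_subsystem N / destroy_subsystem N pair traced in this section expands to the same short RPC sequence (the dif.sh@21-24 and dif.sh@38-39 lines above). For reference, here is that sequence as a standalone script, assuming a running nvmf target reachable through SPDK's stock scripts/rpc.py (the harness's rpc_cmd is a wrapper around the same RPCs); names, sizes, NQNs, and addresses are copied verbatim from the trace:

    #!/usr/bin/env bash
    # Create/destroy sequence behind dif.sh's create_subsystem/destroy_subsystem.
    rpc=./scripts/rpc.py

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
    # (this run's NULL_DIF=2; the first run above used --dif-type 3).
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

    # Expose the bdev through an NVMe-oF subsystem listening on TCP 10.0.0.2:4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Teardown mirrors creation in reverse: delete the subsystem, then the bdev.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_null_delete bdev_null0

The multi-subsystem variants seen below simply repeat this block for cnode1 and cnode2 with bdev_null1/bdev_null2.
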
00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 bdev_null0 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 [2024-07-16 00:30:04.114107] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 bdev_null1 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 bdev_null2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.281 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.282 { 00:26:13.282 "params": { 00:26:13.282 "name": "Nvme$subsystem", 00:26:13.282 "trtype": "$TEST_TRANSPORT", 00:26:13.282 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.282 "adrfam": "ipv4", 00:26:13.282 "trsvcid": "$NVMF_PORT", 00:26:13.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.282 "hdgst": ${hdgst:-false}, 00:26:13.282 "ddgst": ${ddgst:-false} 00:26:13.282 }, 00:26:13.282 "method": "bdev_nvme_attach_controller" 00:26:13.282 } 00:26:13.282 EOF 00:26:13.282 )") 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.282 { 00:26:13.282 "params": { 00:26:13.282 "name": "Nvme$subsystem", 00:26:13.282 "trtype": "$TEST_TRANSPORT", 00:26:13.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.282 "adrfam": "ipv4", 00:26:13.282 "trsvcid": "$NVMF_PORT", 00:26:13.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.282 "hdgst": ${hdgst:-false}, 00:26:13.282 "ddgst": ${ddgst:-false} 00:26:13.282 }, 00:26:13.282 "method": "bdev_nvme_attach_controller" 00:26:13.282 } 00:26:13.282 EOF 00:26:13.282 )") 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.282 { 00:26:13.282 "params": { 00:26:13.282 "name": "Nvme$subsystem", 00:26:13.282 "trtype": "$TEST_TRANSPORT", 00:26:13.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.282 "adrfam": "ipv4", 00:26:13.282 "trsvcid": "$NVMF_PORT", 00:26:13.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.282 "hdgst": ${hdgst:-false}, 00:26:13.282 "ddgst": ${ddgst:-false} 00:26:13.282 }, 00:26:13.282 "method": "bdev_nvme_attach_controller" 00:26:13.282 } 00:26:13.282 EOF 00:26:13.282 )") 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:13.282 "params": { 00:26:13.282 "name": "Nvme0", 00:26:13.282 "trtype": "tcp", 00:26:13.282 "traddr": "10.0.0.2", 00:26:13.282 "adrfam": "ipv4", 00:26:13.282 "trsvcid": "4420", 00:26:13.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.282 "hdgst": false, 00:26:13.282 "ddgst": false 00:26:13.282 }, 00:26:13.282 "method": "bdev_nvme_attach_controller" 00:26:13.282 },{ 00:26:13.282 "params": { 00:26:13.282 "name": "Nvme1", 00:26:13.282 "trtype": "tcp", 00:26:13.282 "traddr": "10.0.0.2", 00:26:13.282 "adrfam": "ipv4", 00:26:13.282 "trsvcid": "4420", 00:26:13.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.282 "hdgst": false, 00:26:13.282 "ddgst": false 00:26:13.282 }, 00:26:13.282 "method": "bdev_nvme_attach_controller" 00:26:13.282 },{ 00:26:13.282 "params": { 00:26:13.282 "name": "Nvme2", 00:26:13.282 "trtype": "tcp", 00:26:13.282 "traddr": "10.0.0.2", 00:26:13.282 "adrfam": "ipv4", 00:26:13.282 "trsvcid": "4420", 00:26:13.282 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:13.282 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:13.282 "hdgst": false, 00:26:13.282 "ddgst": false 00:26:13.282 }, 00:26:13.282 "method": "bdev_nvme_attach_controller" 00:26:13.282 }' 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:13.282 00:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.282 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:13.282 ... 00:26:13.282 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:13.282 ... 00:26:13.282 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:13.282 ... 00:26:13.282 fio-3.35 00:26:13.282 Starting 24 threads 00:26:13.282 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.481 00:26:25.481 filename0: (groupid=0, jobs=1): err= 0: pid=2403948: Tue Jul 16 00:30:15 2024 00:26:25.481 read: IOPS=263, BW=1052KiB/s (1077kB/s)(10.4MiB/10120msec) 00:26:25.481 slat (usec): min=7, max=252, avg=30.37, stdev=19.76 00:26:25.481 clat (msec): min=10, max=355, avg=60.42, stdev=73.93 00:26:25.481 lat (msec): min=10, max=355, avg=60.45, stdev=73.93 00:26:25.481 clat percentiles (msec): 00:26:25.481 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.481 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.481 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 241], 95.00th=[ 268], 00:26:25.481 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 355], 00:26:25.481 | 99.99th=[ 355] 00:26:25.481 bw ( KiB/s): min= 144, max= 1968, per=4.32%, avg=1058.40, stdev=844.33, samples=20 00:26:25.481 iops : min= 36, max= 492, avg=264.60, stdev=211.08, samples=20 00:26:25.481 lat (msec) : 20=1.16%, 50=86.21%, 100=0.60%, 250=4.81%, 500=7.21% 00:26:25.481 cpu : usr=93.75%, sys=3.09%, ctx=118, majf=0, minf=33 00:26:25.482 IO depths : 1=5.1%, 2=11.0%, 4=24.0%, 8=52.4%, 16=7.4%, 32=0.0%, >=64=0.0% 00:26:25.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 issued rwts: total=2662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.482 filename0: (groupid=0, jobs=1): err= 0: pid=2403949: Tue Jul 16 00:30:15 2024 00:26:25.482 read: IOPS=247, BW=991KiB/s (1014kB/s)(9984KiB/10078msec) 00:26:25.482 slat (usec): min=8, max=102, avg=35.72, stdev=17.55 00:26:25.482 clat (msec): min=31, max=500, avg=64.31, stdev=98.86 00:26:25.482 lat (msec): min=31, max=500, avg=64.35, stdev=98.86 00:26:25.482 clat percentiles (msec): 00:26:25.482 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.482 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.482 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 363], 00:26:25.482 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 502], 99.95th=[ 502], 00:26:25.482 | 99.99th=[ 502] 00:26:25.482 bw ( KiB/s): min= 128, max= 2032, per=4.05%, avg=992.00, stdev=867.47, samples=20 00:26:25.482 iops : min= 32, max= 508, avg=248.00, stdev=216.87, samples=20 00:26:25.482 lat (msec) : 50=90.30%, 100=0.64%, 250=0.16%, 500=8.57%, 750=0.32% 00:26:25.482 cpu : usr=97.67%, sys=1.60%, ctx=51, majf=0, 
minf=35 00:26:25.482 IO depths : 1=1.9%, 2=8.1%, 4=24.9%, 8=54.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:25.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.482 filename0: (groupid=0, jobs=1): err= 0: pid=2403950: Tue Jul 16 00:30:15 2024 00:26:25.482 read: IOPS=256, BW=1025KiB/s (1050kB/s)(10.1MiB/10104msec) 00:26:25.482 slat (usec): min=8, max=109, avg=43.23, stdev=25.12 00:26:25.482 clat (msec): min=22, max=398, avg=61.98, stdev=79.19 00:26:25.482 lat (msec): min=22, max=398, avg=62.02, stdev=79.18 00:26:25.482 clat percentiles (msec): 00:26:25.482 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.482 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.482 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 245], 95.00th=[ 271], 00:26:25.482 | 99.00th=[ 376], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:26:25.482 | 99.99th=[ 397] 00:26:25.482 bw ( KiB/s): min= 128, max= 2019, per=4.19%, avg=1028.95, stdev=836.37, samples=20 00:26:25.482 iops : min= 32, max= 504, avg=257.20, stdev=209.05, samples=20 00:26:25.482 lat (msec) : 50=87.10%, 100=1.16%, 250=3.55%, 500=8.19% 00:26:25.482 cpu : usr=96.91%, sys=1.76%, ctx=38, majf=0, minf=33 00:26:25.482 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:26:25.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 issued rwts: total=2590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.482 filename0: (groupid=0, jobs=1): err= 0: pid=2403951: Tue Jul 16 00:30:15 2024 00:26:25.482 read: IOPS=258, BW=1034KiB/s (1059kB/s)(10.2MiB/10104msec) 00:26:25.482 slat (usec): min=8, max=140, avg=26.59, stdev=21.58 00:26:25.482 clat (msec): min=21, max=330, avg=61.61, stdev=73.70 00:26:25.482 lat (msec): min=21, max=330, avg=61.64, stdev=73.70 00:26:25.482 clat percentiles (msec): 00:26:25.482 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.482 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.482 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 243], 95.00th=[ 264], 00:26:25.482 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 330], 00:26:25.482 | 99.99th=[ 330] 00:26:25.482 bw ( KiB/s): min= 240, max= 1956, per=4.23%, avg=1037.80, stdev=818.70, samples=20 00:26:25.482 iops : min= 60, max= 489, avg=259.45, stdev=204.68, samples=20 00:26:25.482 lat (msec) : 50=86.60%, 100=0.61%, 250=4.59%, 500=8.19% 00:26:25.482 cpu : usr=98.16%, sys=1.38%, ctx=29, majf=0, minf=65 00:26:25.482 IO depths : 1=1.7%, 2=7.4%, 4=23.7%, 8=56.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:25.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 issued rwts: total=2612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.482 filename0: (groupid=0, jobs=1): err= 0: pid=2403952: Tue Jul 16 00:30:15 2024 00:26:25.482 read: IOPS=258, BW=1032KiB/s (1057kB/s)(10.2MiB/10096msec) 00:26:25.482 slat (nsec): min=8406, max=98014, avg=34129.46, stdev=16788.76 00:26:25.482 clat 
(msec): min=21, max=281, avg=61.64, stdev=73.75 00:26:25.482 lat (msec): min=21, max=281, avg=61.68, stdev=73.74 00:26:25.482 clat percentiles (msec): 00:26:25.482 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.482 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.482 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 243], 95.00th=[ 264], 00:26:25.482 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 284], 00:26:25.482 | 99.99th=[ 284] 00:26:25.482 bw ( KiB/s): min= 240, max= 1920, per=4.23%, avg=1036.00, stdev=816.62, samples=20 00:26:25.482 iops : min= 60, max= 480, avg=259.00, stdev=204.15, samples=20 00:26:25.482 lat (msec) : 50=86.42%, 100=0.77%, 250=4.60%, 500=8.21% 00:26:25.482 cpu : usr=98.46%, sys=1.13%, ctx=25, majf=0, minf=37 00:26:25.482 IO depths : 1=4.3%, 2=10.4%, 4=24.8%, 8=52.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:26:25.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 issued rwts: total=2606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.482 filename0: (groupid=0, jobs=1): err= 0: pid=2403953: Tue Jul 16 00:30:15 2024 00:26:25.482 read: IOPS=257, BW=1031KiB/s (1056kB/s)(10.2MiB/10104msec) 00:26:25.482 slat (usec): min=8, max=100, avg=24.00, stdev=12.89 00:26:25.482 clat (msec): min=24, max=424, avg=61.77, stdev=76.95 00:26:25.482 lat (msec): min=24, max=424, avg=61.79, stdev=76.95 00:26:25.482 clat percentiles (msec): 00:26:25.482 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.482 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.482 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 247], 95.00th=[ 259], 00:26:25.482 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 426], 99.95th=[ 426], 00:26:25.482 | 99.99th=[ 426] 00:26:25.482 bw ( KiB/s): min= 176, max= 1920, per=4.22%, avg=1034.60, stdev=828.09, samples=20 00:26:25.482 iops : min= 44, max= 480, avg=258.65, stdev=207.02, samples=20 00:26:25.482 lat (msec) : 50=87.71%, 100=0.15%, 250=2.53%, 500=9.60% 00:26:25.482 cpu : usr=96.92%, sys=1.84%, ctx=94, majf=0, minf=46 00:26:25.482 IO depths : 1=5.5%, 2=11.1%, 4=23.0%, 8=53.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:25.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.482 filename0: (groupid=0, jobs=1): err= 0: pid=2403954: Tue Jul 16 00:30:15 2024 00:26:25.482 read: IOPS=262, BW=1049KiB/s (1074kB/s)(10.4MiB/10122msec) 00:26:25.482 slat (usec): min=6, max=105, avg=40.73, stdev=21.15 00:26:25.482 clat (msec): min=12, max=349, avg=60.64, stdev=73.48 00:26:25.482 lat (msec): min=12, max=349, avg=60.68, stdev=73.47 00:26:25.482 clat percentiles (msec): 00:26:25.482 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.482 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.482 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 243], 95.00th=[ 264], 00:26:25.482 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 351], 00:26:25.482 | 99.99th=[ 351] 00:26:25.482 bw ( KiB/s): min= 240, max= 2048, per=4.30%, avg=1055.35, stdev=835.57, samples=20 00:26:25.482 iops : min= 60, max= 512, avg=263.80, stdev=208.85, samples=20 00:26:25.482 
lat (msec) : 20=1.21%, 50=86.21%, 250=4.52%, 500=8.06% 00:26:25.482 cpu : usr=98.09%, sys=1.49%, ctx=15, majf=0, minf=29 00:26:25.482 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:25.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.482 filename0: (groupid=0, jobs=1): err= 0: pid=2403955: Tue Jul 16 00:30:15 2024 00:26:25.482 read: IOPS=256, BW=1025KiB/s (1049kB/s)(10.1MiB/10085msec) 00:26:25.482 slat (usec): min=8, max=118, avg=62.55, stdev=26.97 00:26:25.482 clat (msec): min=17, max=397, avg=61.62, stdev=75.68 00:26:25.482 lat (msec): min=17, max=397, avg=61.68, stdev=75.66 00:26:25.482 clat percentiles (msec): 00:26:25.482 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.482 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:25.482 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 245], 95.00th=[ 259], 00:26:25.482 | 99.00th=[ 305], 99.50th=[ 334], 99.90th=[ 397], 99.95th=[ 397], 00:26:25.482 | 99.99th=[ 397] 00:26:25.482 bw ( KiB/s): min= 176, max= 1920, per=4.21%, avg=1031.20, stdev=821.64, samples=20 00:26:25.482 iops : min= 44, max= 480, avg=257.80, stdev=205.41, samples=20 00:26:25.482 lat (msec) : 20=0.62%, 50=85.91%, 100=1.01%, 250=3.87%, 500=8.59% 00:26:25.482 cpu : usr=98.11%, sys=1.47%, ctx=15, majf=0, minf=33 00:26:25.482 IO depths : 1=3.8%, 2=8.8%, 4=20.9%, 8=57.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:25.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 complete : 0=0.0%, 4=93.1%, 8=1.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.482 issued rwts: total=2584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.482 filename1: (groupid=0, jobs=1): err= 0: pid=2403956: Tue Jul 16 00:30:15 2024 00:26:25.482 read: IOPS=255, BW=1022KiB/s (1046kB/s)(10.1MiB/10087msec) 00:26:25.482 slat (usec): min=8, max=124, avg=41.74, stdev=26.41 00:26:25.482 clat (msec): min=25, max=411, avg=62.27, stdev=80.90 00:26:25.482 lat (msec): min=25, max=411, avg=62.31, stdev=80.89 00:26:25.482 clat percentiles (msec): 00:26:25.482 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.482 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:25.482 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 241], 95.00th=[ 271], 00:26:25.482 | 99.00th=[ 397], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:26:25.482 | 99.99th=[ 414] 00:26:25.482 bw ( KiB/s): min= 128, max= 2048, per=4.18%, avg=1024.00, stdev=834.72, samples=20 00:26:25.482 iops : min= 32, max= 512, avg=256.00, stdev=208.68, samples=20 00:26:25.482 lat (msec) : 50=87.58%, 100=0.62%, 250=4.35%, 500=7.45% 00:26:25.482 cpu : usr=97.32%, sys=1.79%, ctx=193, majf=0, minf=29 00:26:25.482 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 issued rwts: total=2576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.483 filename1: (groupid=0, jobs=1): err= 0: pid=2403957: Tue Jul 16 00:30:15 2024 00:26:25.483 read: IOPS=258, BW=1032KiB/s 
(1057kB/s)(10.2MiB/10115msec) 00:26:25.483 slat (usec): min=6, max=278, avg=29.67, stdev=12.84 00:26:25.483 clat (msec): min=16, max=425, avg=61.78, stdev=79.56 00:26:25.483 lat (msec): min=16, max=425, avg=61.81, stdev=79.55 00:26:25.483 clat percentiles (msec): 00:26:25.483 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.483 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.483 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 245], 95.00th=[ 268], 00:26:25.483 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 426], 99.95th=[ 426], 00:26:25.483 | 99.99th=[ 426] 00:26:25.483 bw ( KiB/s): min= 128, max= 2032, per=4.23%, avg=1037.75, stdev=839.90, samples=20 00:26:25.483 iops : min= 32, max= 508, avg=259.40, stdev=209.94, samples=20 00:26:25.483 lat (msec) : 20=0.23%, 50=88.05%, 250=2.07%, 500=9.66% 00:26:25.483 cpu : usr=95.29%, sys=2.71%, ctx=58, majf=0, minf=32 00:26:25.483 IO depths : 1=2.0%, 2=7.8%, 4=23.6%, 8=56.0%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.483 filename1: (groupid=0, jobs=1): err= 0: pid=2403958: Tue Jul 16 00:30:15 2024 00:26:25.483 read: IOPS=246, BW=986KiB/s (1010kB/s)(9944KiB/10082msec) 00:26:25.483 slat (nsec): min=8259, max=89037, avg=32130.77, stdev=17116.84 00:26:25.483 clat (msec): min=18, max=534, avg=64.62, stdev=99.04 00:26:25.483 lat (msec): min=18, max=534, avg=64.65, stdev=99.03 00:26:25.483 clat percentiles (msec): 00:26:25.483 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.483 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.483 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 50], 95.00th=[ 376], 00:26:25.483 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 531], 99.95th=[ 535], 00:26:25.483 | 99.99th=[ 535] 00:26:25.483 bw ( KiB/s): min= 128, max= 1968, per=4.03%, avg=988.00, stdev=862.23, samples=20 00:26:25.483 iops : min= 32, max= 492, avg=247.00, stdev=215.56, samples=20 00:26:25.483 lat (msec) : 20=0.24%, 50=89.94%, 100=0.72%, 250=0.32%, 500=8.61% 00:26:25.483 lat (msec) : 750=0.16% 00:26:25.483 cpu : usr=97.90%, sys=1.41%, ctx=75, majf=0, minf=37 00:26:25.483 IO depths : 1=1.7%, 2=7.8%, 4=24.4%, 8=55.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 issued rwts: total=2486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.483 filename1: (groupid=0, jobs=1): err= 0: pid=2403959: Tue Jul 16 00:30:15 2024 00:26:25.483 read: IOPS=247, BW=990KiB/s (1014kB/s)(9984KiB/10081msec) 00:26:25.483 slat (usec): min=8, max=116, avg=44.63, stdev=23.78 00:26:25.483 clat (msec): min=31, max=517, avg=64.19, stdev=99.25 00:26:25.483 lat (msec): min=31, max=517, avg=64.24, stdev=99.24 00:26:25.483 clat percentiles (msec): 00:26:25.483 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.483 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:25.483 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 368], 00:26:25.483 | 99.00th=[ 422], 99.50th=[ 435], 99.90th=[ 518], 99.95th=[ 518], 00:26:25.483 | 99.99th=[ 518] 00:26:25.483 bw ( 
KiB/s): min= 128, max= 2048, per=4.04%, avg=992.00, stdev=868.90, samples=20 00:26:25.483 iops : min= 32, max= 512, avg=248.00, stdev=217.22, samples=20 00:26:25.483 lat (msec) : 50=90.38%, 100=0.64%, 250=0.16%, 500=8.57%, 750=0.24% 00:26:25.483 cpu : usr=95.14%, sys=2.69%, ctx=103, majf=0, minf=24 00:26:25.483 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.483 filename1: (groupid=0, jobs=1): err= 0: pid=2403960: Tue Jul 16 00:30:15 2024 00:26:25.483 read: IOPS=258, BW=1036KiB/s (1061kB/s)(10.2MiB/10117msec) 00:26:25.483 slat (usec): min=7, max=115, avg=21.05, stdev=15.92 00:26:25.483 clat (msec): min=20, max=393, avg=61.50, stdev=76.60 00:26:25.483 lat (msec): min=20, max=393, avg=61.52, stdev=76.60 00:26:25.483 clat percentiles (msec): 00:26:25.483 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.483 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.483 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 247], 95.00th=[ 259], 00:26:25.483 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:26:25.483 | 99.99th=[ 393] 00:26:25.483 bw ( KiB/s): min= 176, max= 1920, per=4.25%, avg=1041.60, stdev=831.24, samples=20 00:26:25.483 iops : min= 44, max= 480, avg=260.40, stdev=207.81, samples=20 00:26:25.483 lat (msec) : 50=87.86%, 100=0.08%, 250=2.67%, 500=9.39% 00:26:25.483 cpu : usr=97.54%, sys=1.73%, ctx=129, majf=0, minf=31 00:26:25.483 IO depths : 1=5.5%, 2=11.2%, 4=23.4%, 8=52.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 issued rwts: total=2620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.483 filename1: (groupid=0, jobs=1): err= 0: pid=2403961: Tue Jul 16 00:30:15 2024 00:26:25.483 read: IOPS=263, BW=1053KiB/s (1078kB/s)(10.4MiB/10124msec) 00:26:25.483 slat (nsec): min=6957, max=91309, avg=34827.23, stdev=17426.01 00:26:25.483 clat (msec): min=13, max=402, avg=60.52, stdev=76.45 00:26:25.483 lat (msec): min=13, max=402, avg=60.55, stdev=76.44 00:26:25.483 clat percentiles (msec): 00:26:25.483 | 1.00th=[ 24], 5.00th=[ 30], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.483 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.483 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 236], 95.00th=[ 264], 00:26:25.483 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 405], 00:26:25.483 | 99.99th=[ 405] 00:26:25.483 bw ( KiB/s): min= 176, max= 2080, per=4.32%, avg=1059.20, stdev=850.83, samples=20 00:26:25.483 iops : min= 44, max= 520, avg=264.80, stdev=212.71, samples=20 00:26:25.483 lat (msec) : 20=0.86%, 50=87.05%, 250=3.30%, 500=8.78% 00:26:25.483 cpu : usr=96.41%, sys=2.01%, ctx=30, majf=0, minf=41 00:26:25.483 IO depths : 1=4.9%, 2=10.4%, 4=22.5%, 8=54.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 issued rwts: total=2664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.483 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:26:25.483 filename1: (groupid=0, jobs=1): err= 0: pid=2403962: Tue Jul 16 00:30:15 2024 00:26:25.483 read: IOPS=256, BW=1026KiB/s (1051kB/s)(10.1MiB/10104msec) 00:26:25.483 slat (usec): min=8, max=142, avg=37.25, stdev=24.26 00:26:25.483 clat (msec): min=17, max=401, avg=62.19, stdev=79.10 00:26:25.483 lat (msec): min=17, max=401, avg=62.23, stdev=79.09 00:26:25.483 clat percentiles (msec): 00:26:25.483 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.483 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.483 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 243], 95.00th=[ 271], 00:26:25.483 | 99.00th=[ 351], 99.50th=[ 372], 99.90th=[ 393], 99.95th=[ 401], 00:26:25.483 | 99.99th=[ 401] 00:26:25.483 bw ( KiB/s): min= 144, max= 1968, per=4.20%, avg=1029.80, stdev=830.52, samples=20 00:26:25.483 iops : min= 36, max= 492, avg=257.45, stdev=207.63, samples=20 00:26:25.483 lat (msec) : 20=0.42%, 50=86.46%, 100=1.16%, 250=3.32%, 500=8.64% 00:26:25.483 cpu : usr=97.70%, sys=1.72%, ctx=23, majf=0, minf=26 00:26:25.483 IO depths : 1=0.5%, 2=1.7%, 4=5.3%, 8=75.9%, 16=16.6%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=90.3%, 8=8.3%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.483 filename1: (groupid=0, jobs=1): err= 0: pid=2403963: Tue Jul 16 00:30:15 2024 00:26:25.483 read: IOPS=246, BW=987KiB/s (1010kB/s)(9952KiB/10086msec) 00:26:25.483 slat (usec): min=8, max=101, avg=35.33, stdev=16.09 00:26:25.483 clat (msec): min=21, max=506, avg=64.57, stdev=99.48 00:26:25.483 lat (msec): min=21, max=506, avg=64.61, stdev=99.48 00:26:25.483 clat percentiles (msec): 00:26:25.483 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.483 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.483 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 54], 95.00th=[ 372], 00:26:25.483 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 435], 99.95th=[ 506], 00:26:25.483 | 99.99th=[ 506] 00:26:25.483 bw ( KiB/s): min= 128, max= 2000, per=4.03%, avg=988.80, stdev=863.28, samples=20 00:26:25.483 iops : min= 32, max= 500, avg=247.20, stdev=215.82, samples=20 00:26:25.483 lat (msec) : 50=89.95%, 100=1.05%, 250=0.56%, 500=8.36%, 750=0.08% 00:26:25.483 cpu : usr=96.24%, sys=2.27%, ctx=84, majf=0, minf=30 00:26:25.483 IO depths : 1=2.3%, 2=8.0%, 4=23.1%, 8=56.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 issued rwts: total=2488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.483 filename2: (groupid=0, jobs=1): err= 0: pid=2403964: Tue Jul 16 00:30:15 2024 00:26:25.483 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.3MiB/10120msec) 00:26:25.483 slat (nsec): min=6451, max=74725, avg=29950.27, stdev=11332.61 00:26:25.483 clat (msec): min=15, max=394, avg=61.07, stdev=73.63 00:26:25.483 lat (msec): min=15, max=394, avg=61.10, stdev=73.63 00:26:25.483 clat percentiles (msec): 00:26:25.483 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.483 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.483 | 70.00th=[ 34], 80.00th=[ 36], 
90.00th=[ 243], 95.00th=[ 264], 00:26:25.483 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 397], 00:26:25.483 | 99.99th=[ 397] 00:26:25.483 bw ( KiB/s): min= 240, max= 2048, per=4.27%, avg=1048.95, stdev=829.03, samples=20 00:26:25.483 iops : min= 60, max= 512, avg=262.20, stdev=207.22, samples=20 00:26:25.483 lat (msec) : 20=0.61%, 50=86.73%, 250=4.70%, 500=7.96% 00:26:25.483 cpu : usr=98.22%, sys=1.36%, ctx=16, majf=0, minf=31 00:26:25.483 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.483 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 issued rwts: total=2638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.484 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.484 filename2: (groupid=0, jobs=1): err= 0: pid=2403965: Tue Jul 16 00:30:15 2024 00:26:25.484 read: IOPS=253, BW=1014KiB/s (1038kB/s)(9.98MiB/10082msec) 00:26:25.484 slat (usec): min=8, max=106, avg=38.52, stdev=25.78 00:26:25.484 clat (msec): min=18, max=521, avg=62.91, stdev=97.13 00:26:25.484 lat (msec): min=18, max=521, avg=62.94, stdev=97.13 00:26:25.484 clat percentiles (msec): 00:26:25.484 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 30], 00:26:25.484 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.484 | 70.00th=[ 36], 80.00th=[ 38], 90.00th=[ 77], 95.00th=[ 376], 00:26:25.484 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 510], 99.95th=[ 523], 00:26:25.484 | 99.99th=[ 523] 00:26:25.484 bw ( KiB/s): min= 128, max= 2064, per=4.14%, avg=1016.00, stdev=886.21, samples=20 00:26:25.484 iops : min= 32, max= 516, avg=254.00, stdev=221.55, samples=20 00:26:25.484 lat (msec) : 20=0.59%, 50=89.01%, 100=1.25%, 250=1.02%, 500=7.90% 00:26:25.484 lat (msec) : 750=0.23% 00:26:25.484 cpu : usr=98.17%, sys=1.41%, ctx=21, majf=0, minf=36 00:26:25.484 IO depths : 1=0.7%, 2=2.3%, 4=8.1%, 8=74.4%, 16=14.5%, 32=0.0%, >=64=0.0% 00:26:25.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 complete : 0=0.0%, 4=90.2%, 8=6.8%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.484 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.484 filename2: (groupid=0, jobs=1): err= 0: pid=2403966: Tue Jul 16 00:30:15 2024 00:26:25.484 read: IOPS=255, BW=1022KiB/s (1047kB/s)(10.1MiB/10101msec) 00:26:25.484 slat (usec): min=5, max=121, avg=64.48, stdev=25.31 00:26:25.484 clat (msec): min=30, max=423, avg=62.02, stdev=79.67 00:26:25.484 lat (msec): min=30, max=423, avg=62.08, stdev=79.65 00:26:25.484 clat percentiles (msec): 00:26:25.484 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:26:25.484 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:25.484 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 247], 95.00th=[ 262], 00:26:25.484 | 99.00th=[ 376], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 426], 00:26:25.484 | 99.99th=[ 426] 00:26:25.484 bw ( KiB/s): min= 128, max= 1920, per=4.18%, avg=1025.80, stdev=826.19, samples=20 00:26:25.484 iops : min= 32, max= 480, avg=256.45, stdev=206.55, samples=20 00:26:25.484 lat (msec) : 50=87.37%, 100=0.62%, 250=2.87%, 500=9.14% 00:26:25.484 cpu : usr=98.10%, sys=1.38%, ctx=23, majf=0, minf=33 00:26:25.484 IO depths : 1=5.7%, 2=11.5%, 4=23.8%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:25.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:25.484 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 issued rwts: total=2582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.484 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.484 filename2: (groupid=0, jobs=1): err= 0: pid=2403967: Tue Jul 16 00:30:15 2024 00:26:25.484 read: IOPS=258, BW=1032KiB/s (1057kB/s)(10.2MiB/10121msec) 00:26:25.484 slat (usec): min=6, max=176, avg=25.38, stdev=14.91 00:26:25.484 clat (msec): min=16, max=507, avg=61.77, stdev=79.92 00:26:25.484 lat (msec): min=16, max=507, avg=61.80, stdev=79.92 00:26:25.484 clat percentiles (msec): 00:26:25.484 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.484 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.484 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 243], 95.00th=[ 264], 00:26:25.484 | 99.00th=[ 380], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 506], 00:26:25.484 | 99.99th=[ 506] 00:26:25.484 bw ( KiB/s): min= 128, max= 2032, per=4.23%, avg=1038.40, stdev=838.88, samples=20 00:26:25.484 iops : min= 32, max= 508, avg=259.60, stdev=209.72, samples=20 00:26:25.484 lat (msec) : 20=0.23%, 50=87.98%, 250=3.06%, 500=8.65%, 750=0.08% 00:26:25.484 cpu : usr=97.48%, sys=1.85%, ctx=20, majf=0, minf=34 00:26:25.484 IO depths : 1=3.4%, 2=9.1%, 4=23.9%, 8=54.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:25.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 issued rwts: total=2612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.484 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.484 filename2: (groupid=0, jobs=1): err= 0: pid=2403968: Tue Jul 16 00:30:15 2024 00:26:25.484 read: IOPS=247, BW=990KiB/s (1014kB/s)(9984KiB/10085msec) 00:26:25.484 slat (usec): min=7, max=117, avg=47.94, stdev=26.59 00:26:25.484 clat (msec): min=23, max=517, avg=64.21, stdev=99.43 00:26:25.484 lat (msec): min=23, max=517, avg=64.26, stdev=99.43 00:26:25.484 clat percentiles (msec): 00:26:25.484 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.484 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:25.484 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 368], 00:26:25.484 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 518], 99.95th=[ 518], 00:26:25.484 | 99.99th=[ 518] 00:26:25.484 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=992.15, stdev=861.79, samples=20 00:26:25.484 iops : min= 32, max= 480, avg=248.00, stdev=215.42, samples=20 00:26:25.484 lat (msec) : 50=90.38%, 100=0.64%, 500=8.65%, 750=0.32% 00:26:25.484 cpu : usr=96.35%, sys=2.32%, ctx=78, majf=0, minf=27 00:26:25.484 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:25.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.484 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.484 filename2: (groupid=0, jobs=1): err= 0: pid=2403969: Tue Jul 16 00:30:15 2024 00:26:25.484 read: IOPS=261, BW=1046KiB/s (1071kB/s)(10.3MiB/10123msec) 00:26:25.484 slat (nsec): min=8355, max=99654, avg=25644.37, stdev=22322.88 00:26:25.484 clat (msec): min=13, max=381, avg=60.86, stdev=74.72 00:26:25.484 lat (msec): min=13, max=381, avg=60.89, stdev=74.71 00:26:25.484 clat percentiles (msec): 00:26:25.484 | 1.00th=[ 
17], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.484 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.484 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 245], 95.00th=[ 257], 00:26:25.484 | 99.00th=[ 275], 99.50th=[ 326], 99.90th=[ 380], 99.95th=[ 380], 00:26:25.484 | 99.99th=[ 380] 00:26:25.484 bw ( KiB/s): min= 176, max= 2048, per=4.29%, avg=1052.00, stdev=838.80, samples=20 00:26:25.484 iops : min= 44, max= 512, avg=263.00, stdev=209.70, samples=20 00:26:25.484 lat (msec) : 20=1.21%, 50=86.47%, 250=3.10%, 500=9.22% 00:26:25.484 cpu : usr=98.14%, sys=1.45%, ctx=14, majf=0, minf=55 00:26:25.484 IO depths : 1=5.5%, 2=11.1%, 4=23.3%, 8=53.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:25.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 issued rwts: total=2646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.484 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.484 filename2: (groupid=0, jobs=1): err= 0: pid=2403970: Tue Jul 16 00:30:15 2024 00:26:25.484 read: IOPS=255, BW=1023KiB/s (1047kB/s)(10.1MiB/10092msec) 00:26:25.484 slat (nsec): min=8370, max=80758, avg=31132.45, stdev=17044.20 00:26:25.484 clat (msec): min=20, max=439, avg=62.33, stdev=79.50 00:26:25.484 lat (msec): min=21, max=439, avg=62.36, stdev=79.49 00:26:25.484 clat percentiles (msec): 00:26:25.484 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.484 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.484 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 243], 95.00th=[ 271], 00:26:25.484 | 99.00th=[ 368], 99.50th=[ 397], 99.90th=[ 401], 99.95th=[ 439], 00:26:25.484 | 99.99th=[ 439] 00:26:25.484 bw ( KiB/s): min= 128, max= 1968, per=4.18%, avg=1025.60, stdev=832.37, samples=20 00:26:25.484 iops : min= 32, max= 492, avg=256.40, stdev=208.09, samples=20 00:26:25.484 lat (msec) : 50=86.98%, 100=1.24%, 250=3.57%, 500=8.22% 00:26:25.484 cpu : usr=98.05%, sys=1.53%, ctx=13, majf=0, minf=41 00:26:25.484 IO depths : 1=1.9%, 2=7.9%, 4=24.0%, 8=55.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:25.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 complete : 0=0.0%, 4=94.2%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 issued rwts: total=2580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.484 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.484 filename2: (groupid=0, jobs=1): err= 0: pid=2403971: Tue Jul 16 00:30:15 2024 00:26:25.484 read: IOPS=257, BW=1032KiB/s (1056kB/s)(10.2MiB/10104msec) 00:26:25.484 slat (usec): min=8, max=111, avg=40.26, stdev=27.25 00:26:25.484 clat (msec): min=31, max=402, avg=61.63, stdev=75.89 00:26:25.484 lat (msec): min=31, max=403, avg=61.67, stdev=75.88 00:26:25.484 clat percentiles (msec): 00:26:25.484 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:25.484 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:25.484 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 243], 95.00th=[ 264], 00:26:25.484 | 99.00th=[ 279], 99.50th=[ 401], 99.90th=[ 405], 99.95th=[ 405], 00:26:25.484 | 99.99th=[ 405] 00:26:25.484 bw ( KiB/s): min= 144, max= 2035, per=4.22%, avg=1035.35, stdev=830.33, samples=20 00:26:25.484 iops : min= 36, max= 508, avg=258.80, stdev=207.53, samples=20 00:26:25.484 lat (msec) : 50=86.57%, 100=1.23%, 250=4.53%, 500=7.67% 00:26:25.484 cpu : usr=97.74%, sys=1.57%, ctx=32, majf=0, minf=27 00:26:25.484 IO depths : 1=5.6%, 2=11.9%, 
4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:25.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.484 issued rwts: total=2606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.484 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:25.484 00:26:25.484 Run status group 0 (all jobs): 00:26:25.484 READ: bw=23.9MiB/s (25.1MB/s), 986KiB/s-1053KiB/s (1010kB/s-1078kB/s), io=242MiB (254MB), run=10078-10124msec 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:25.484 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 bdev_null0 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 [2024-07-16 00:30:16.030421] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 
00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 bdev_null1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:25.485 { 00:26:25.485 "params": { 00:26:25.485 "name": "Nvme$subsystem", 00:26:25.485 "trtype": "$TEST_TRANSPORT", 00:26:25.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:25.485 "adrfam": "ipv4", 00:26:25.485 "trsvcid": "$NVMF_PORT", 00:26:25.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:25.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:25.485 "hdgst": ${hdgst:-false}, 00:26:25.485 "ddgst": ${ddgst:-false} 00:26:25.485 }, 00:26:25.485 "method": 
"bdev_nvme_attach_controller" 00:26:25.485 } 00:26:25.485 EOF 00:26:25.485 )") 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:25.485 { 00:26:25.485 "params": { 00:26:25.485 "name": "Nvme$subsystem", 00:26:25.485 "trtype": "$TEST_TRANSPORT", 00:26:25.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:25.485 "adrfam": "ipv4", 00:26:25.485 "trsvcid": "$NVMF_PORT", 00:26:25.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:25.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:25.485 "hdgst": ${hdgst:-false}, 00:26:25.485 "ddgst": ${ddgst:-false} 00:26:25.485 }, 00:26:25.485 "method": "bdev_nvme_attach_controller" 00:26:25.485 } 00:26:25.485 EOF 00:26:25.485 )") 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:25.485 00:30:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:25.485 "params": { 00:26:25.485 "name": "Nvme0", 00:26:25.485 "trtype": "tcp", 00:26:25.485 "traddr": "10.0.0.2", 00:26:25.485 "adrfam": "ipv4", 00:26:25.485 "trsvcid": "4420", 00:26:25.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:25.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:25.485 "hdgst": false, 00:26:25.485 "ddgst": false 00:26:25.485 }, 00:26:25.485 "method": "bdev_nvme_attach_controller" 00:26:25.485 },{ 00:26:25.485 "params": { 00:26:25.485 "name": "Nvme1", 00:26:25.485 "trtype": "tcp", 00:26:25.485 "traddr": "10.0.0.2", 00:26:25.485 "adrfam": "ipv4", 00:26:25.485 "trsvcid": "4420", 00:26:25.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:25.486 "hdgst": false, 00:26:25.486 "ddgst": false 00:26:25.486 }, 00:26:25.486 "method": "bdev_nvme_attach_controller" 00:26:25.486 }' 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:25.486 00:30:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:25.486 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:25.486 ... 00:26:25.486 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:25.486 ... 
00:26:25.486 fio-3.35 00:26:25.486 Starting 4 threads 00:26:25.486 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.742 00:26:30.742 filename0: (groupid=0, jobs=1): err= 0: pid=2405857: Tue Jul 16 00:30:22 2024 00:26:30.742 read: IOPS=1784, BW=13.9MiB/s (14.6MB/s)(69.7MiB/5001msec) 00:26:30.742 slat (nsec): min=5181, max=61869, avg=12362.89, stdev=6341.99 00:26:30.742 clat (usec): min=1276, max=7791, avg=4445.63, stdev=756.64 00:26:30.742 lat (usec): min=1283, max=7799, avg=4457.99, stdev=755.84 00:26:30.742 clat percentiles (usec): 00:26:30.742 | 1.00th=[ 3195], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3982], 00:26:30.742 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:26:30.742 | 70.00th=[ 4424], 80.00th=[ 4686], 90.00th=[ 5932], 95.00th=[ 6194], 00:26:30.742 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7439], 99.95th=[ 7635], 00:26:30.742 | 99.99th=[ 7767] 00:26:30.742 bw ( KiB/s): min=13744, max=14896, per=24.80%, avg=14250.22, stdev=339.74, samples=9 00:26:30.742 iops : min= 1718, max= 1862, avg=1781.22, stdev=42.48, samples=9 00:26:30.742 lat (msec) : 2=0.03%, 4=21.46%, 10=78.51% 00:26:30.742 cpu : usr=94.26%, sys=5.26%, ctx=10, majf=0, minf=52 00:26:30.742 IO depths : 1=0.1%, 2=2.1%, 4=69.7%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.742 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.742 issued rwts: total=8924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:30.743 filename0: (groupid=0, jobs=1): err= 0: pid=2405858: Tue Jul 16 00:30:22 2024 00:26:30.743 read: IOPS=1810, BW=14.1MiB/s (14.8MB/s)(70.8MiB/5004msec) 00:26:30.743 slat (nsec): min=5393, max=67201, avg=11561.41, stdev=5568.23 00:26:30.743 clat (usec): min=1890, max=7516, avg=4383.80, stdev=732.94 00:26:30.743 lat (usec): min=1903, max=7535, avg=4395.36, stdev=732.51 00:26:30.743 clat percentiles (usec): 00:26:30.743 | 1.00th=[ 3032], 5.00th=[ 3458], 10.00th=[ 3720], 20.00th=[ 3884], 00:26:30.743 | 30.00th=[ 4015], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4359], 00:26:30.743 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5473], 95.00th=[ 6128], 00:26:30.743 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7439], 00:26:30.743 | 99.99th=[ 7504] 00:26:30.743 bw ( KiB/s): min=14256, max=14960, per=25.20%, avg=14480.00, stdev=209.97, samples=10 00:26:30.743 iops : min= 1782, max= 1870, avg=1810.00, stdev=26.25, samples=10 00:26:30.743 lat (msec) : 2=0.01%, 4=27.32%, 10=72.67% 00:26:30.743 cpu : usr=94.14%, sys=5.36%, ctx=10, majf=0, minf=43 00:26:30.743 IO depths : 1=0.1%, 2=2.1%, 4=68.7%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.743 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.743 issued rwts: total=9058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:30.743 filename1: (groupid=0, jobs=1): err= 0: pid=2405859: Tue Jul 16 00:30:22 2024 00:26:30.743 read: IOPS=1805, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5003msec) 00:26:30.743 slat (nsec): min=5260, max=73944, avg=15592.80, stdev=7907.51 00:26:30.743 clat (usec): min=2313, max=7438, avg=4384.46, stdev=703.21 00:26:30.743 lat (usec): min=2327, max=7447, avg=4400.06, stdev=702.63 00:26:30.743 clat percentiles (usec): 00:26:30.743 | 1.00th=[ 3228], 5.00th=[ 
3654], 10.00th=[ 3752], 20.00th=[ 3949], 00:26:30.743 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:26:30.743 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5473], 95.00th=[ 6063], 00:26:30.743 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7242], 99.95th=[ 7373], 00:26:30.743 | 99.99th=[ 7439] 00:26:30.743 bw ( KiB/s): min=13792, max=15136, per=25.14%, avg=14444.50, stdev=442.89, samples=10 00:26:30.743 iops : min= 1724, max= 1892, avg=1805.50, stdev=55.32, samples=10 00:26:30.743 lat (msec) : 4=23.89%, 10=76.11% 00:26:30.743 cpu : usr=94.98%, sys=4.48%, ctx=15, majf=0, minf=41 00:26:30.743 IO depths : 1=0.2%, 2=2.2%, 4=67.9%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.743 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.743 issued rwts: total=9034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:30.743 filename1: (groupid=0, jobs=1): err= 0: pid=2405860: Tue Jul 16 00:30:22 2024 00:26:30.743 read: IOPS=1784, BW=13.9MiB/s (14.6MB/s)(69.7MiB/5002msec) 00:26:30.743 slat (nsec): min=5786, max=55769, avg=12410.98, stdev=6369.77 00:26:30.743 clat (usec): min=1522, max=7507, avg=4445.05, stdev=772.05 00:26:30.743 lat (usec): min=1530, max=7515, avg=4457.46, stdev=770.90 00:26:30.743 clat percentiles (usec): 00:26:30.743 | 1.00th=[ 3097], 5.00th=[ 3654], 10.00th=[ 3851], 20.00th=[ 3982], 00:26:30.743 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4359], 00:26:30.743 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5997], 95.00th=[ 6259], 00:26:30.743 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 7439], 00:26:30.743 | 99.99th=[ 7504] 00:26:30.743 bw ( KiB/s): min=13696, max=14944, per=24.81%, avg=14254.22, stdev=354.12, samples=9 00:26:30.743 iops : min= 1712, max= 1868, avg=1781.78, stdev=44.27, samples=9 00:26:30.743 lat (msec) : 2=0.03%, 4=21.58%, 10=78.39% 00:26:30.743 cpu : usr=94.78%, sys=4.74%, ctx=7, majf=0, minf=44 00:26:30.743 IO depths : 1=0.1%, 2=2.2%, 4=70.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.743 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.743 issued rwts: total=8925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:30.743 00:26:30.743 Run status group 0 (all jobs): 00:26:30.743 READ: bw=56.1MiB/s (58.8MB/s), 13.9MiB/s-14.1MiB/s (14.6MB/s-14.8MB/s), io=281MiB (294MB), run=5001-5004msec 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.019 00:26:31.019 real 0m24.585s 00:26:31.019 user 4m33.056s 00:26:31.019 sys 0m7.687s 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:31.019 00:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.019 ************************************ 00:26:31.019 END TEST fio_dif_rand_params 00:26:31.019 ************************************ 00:26:31.019 00:30:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:31.019 00:30:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:31.019 00:30:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:31.019 00:30:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.019 00:30:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:31.019 ************************************ 00:26:31.019 START TEST fio_dif_digest 00:26:31.019 ************************************ 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:31.019 00:30:22 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:31.019 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.020 bdev_null0 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.020 [2024-07-16 00:30:22.476427] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:31.020 { 00:26:31.020 "params": { 00:26:31.020 "name": "Nvme$subsystem", 00:26:31.020 "trtype": "$TEST_TRANSPORT", 00:26:31.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.020 "adrfam": "ipv4", 00:26:31.020 "trsvcid": "$NVMF_PORT", 00:26:31.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.020 "hdgst": ${hdgst:-false}, 00:26:31.020 "ddgst": ${ddgst:-false} 00:26:31.020 }, 00:26:31.020 "method": "bdev_nvme_attach_controller" 00:26:31.020 } 00:26:31.020 EOF 00:26:31.020 )") 00:26:31.020 
00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
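The digest pass reuses the same JSON-assembly pattern with two deltas already visible in this trace: the backing bdev was created with --dif-type 3, and the attach parameters printed just below switch on NVMe/TCP header and data digests. A minimal sketch of just the changed pieces, values as in the trace:

    # DIF type 3 null bdev for the digest test
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # ...and in the fio attach JSON the digest knobs flip to:
    #   "hdgst": true,
    #   "ddgst": true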
00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:31.020 "params": { 00:26:31.020 "name": "Nvme0", 00:26:31.020 "trtype": "tcp", 00:26:31.020 "traddr": "10.0.0.2", 00:26:31.020 "adrfam": "ipv4", 00:26:31.020 "trsvcid": "4420", 00:26:31.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:31.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:31.020 "hdgst": true, 00:26:31.020 "ddgst": true 00:26:31.020 }, 00:26:31.020 "method": "bdev_nvme_attach_controller" 00:26:31.020 }' 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:31.020 00:30:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.285 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:31.285 ... 
00:26:31.285 fio-3.35 00:26:31.285 Starting 3 threads 00:26:31.285 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.475 00:26:43.475 filename0: (groupid=0, jobs=1): err= 0: pid=2406625: Tue Jul 16 00:30:33 2024 00:26:43.475 read: IOPS=191, BW=24.0MiB/s (25.2MB/s)(241MiB/10047msec) 00:26:43.475 slat (nsec): min=7638, max=45865, avg=15245.10, stdev=4317.59 00:26:43.475 clat (usec): min=6717, max=59776, avg=15583.73, stdev=5919.38 00:26:43.475 lat (usec): min=6730, max=59797, avg=15598.97, stdev=5919.44 00:26:43.475 clat percentiles (usec): 00:26:43.475 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[10814], 20.00th=[13042], 00:26:43.475 | 30.00th=[14353], 40.00th=[14877], 50.00th=[15401], 60.00th=[15795], 00:26:43.475 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17433], 95.00th=[18220], 00:26:43.475 | 99.00th=[55313], 99.50th=[57410], 99.90th=[58983], 99.95th=[60031], 00:26:43.475 | 99.99th=[60031] 00:26:43.475 bw ( KiB/s): min=19712, max=28160, per=33.35%, avg=24654.90, stdev=2192.41, samples=20 00:26:43.475 iops : min= 154, max= 220, avg=192.60, stdev=17.16, samples=20 00:26:43.475 lat (msec) : 10=3.53%, 20=94.50%, 50=0.26%, 100=1.71% 00:26:43.475 cpu : usr=91.03%, sys=8.48%, ctx=31, majf=0, minf=157 00:26:43.475 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.475 issued rwts: total=1929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.475 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.476 filename0: (groupid=0, jobs=1): err= 0: pid=2406626: Tue Jul 16 00:30:33 2024 00:26:43.476 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(279MiB/10046msec) 00:26:43.476 slat (usec): min=7, max=132, avg=16.81, stdev= 6.14 00:26:43.476 clat (usec): min=6216, max=56057, avg=13487.13, stdev=5597.30 00:26:43.476 lat (usec): min=6230, max=56070, avg=13503.94, stdev=5597.03 00:26:43.476 clat percentiles (usec): 00:26:43.476 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11469], 00:26:43.476 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13173], 60.00th=[13435], 00:26:43.476 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15270], 00:26:43.476 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:26:43.476 | 99.99th=[55837] 00:26:43.476 bw ( KiB/s): min=23808, max=33536, per=38.54%, avg=28492.80, stdev=2403.07, samples=20 00:26:43.476 iops : min= 186, max= 262, avg=222.60, stdev=18.77, samples=20 00:26:43.476 lat (msec) : 10=7.00%, 20=91.29%, 50=0.04%, 100=1.66% 00:26:43.476 cpu : usr=91.24%, sys=8.24%, ctx=21, majf=0, minf=195 00:26:43.476 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.476 issued rwts: total=2228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.476 filename0: (groupid=0, jobs=1): err= 0: pid=2406627: Tue Jul 16 00:30:33 2024 00:26:43.476 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(206MiB/10005msec) 00:26:43.476 slat (nsec): min=7649, max=44305, avg=15675.77, stdev=4411.20 00:26:43.476 clat (usec): min=8046, max=97956, avg=18213.87, stdev=9900.91 00:26:43.476 lat (usec): min=8059, max=97971, avg=18229.54, stdev=9900.91 00:26:43.476 clat percentiles (usec): 00:26:43.476 | 
1.00th=[ 9241], 5.00th=[10421], 10.00th=[12911], 20.00th=[14877], 00:26:43.476 | 30.00th=[15401], 40.00th=[15926], 50.00th=[16319], 60.00th=[16909], 00:26:43.476 | 70.00th=[17171], 80.00th=[17957], 90.00th=[18744], 95.00th=[55313], 00:26:43.476 | 99.00th=[58983], 99.50th=[59507], 99.90th=[61080], 99.95th=[98042], 00:26:43.476 | 99.99th=[98042] 00:26:43.476 bw ( KiB/s): min=16640, max=25856, per=28.46%, avg=21043.20, stdev=2689.57, samples=20 00:26:43.476 iops : min= 130, max= 202, avg=164.40, stdev=21.01, samples=20 00:26:43.476 lat (msec) : 10=3.16%, 20=90.34%, 50=0.91%, 100=5.59% 00:26:43.476 cpu : usr=92.28%, sys=7.24%, ctx=16, majf=0, minf=168 00:26:43.476 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.476 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.476 00:26:43.476 Run status group 0 (all jobs): 00:26:43.476 READ: bw=72.2MiB/s (75.7MB/s), 20.6MiB/s-27.7MiB/s (21.6MB/s-29.1MB/s), io=725MiB (761MB), run=10005-10047msec 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.476 00:26:43.476 real 0m11.149s 00:26:43.476 user 0m28.695s 00:26:43.476 sys 0m2.679s 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:43.476 00:30:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:43.476 ************************************ 00:26:43.476 END TEST fio_dif_digest 00:26:43.476 ************************************ 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:43.476 00:30:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:43.476 00:30:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:26:43.476 rmmod nvme_tcp 00:26:43.476 rmmod nvme_fabrics 00:26:43.476 rmmod nvme_keyring 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2399796 ']' 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2399796 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2399796 ']' 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2399796 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2399796 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2399796' 00:26:43.476 killing process with pid 2399796 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2399796 00:26:43.476 00:30:33 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2399796 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:43.476 00:30:33 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:43.476 Waiting for block devices as requested 00:26:43.751 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:43.751 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:44.009 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:44.010 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:44.010 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:44.010 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:44.267 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:44.267 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:44.267 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:44.267 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:44.524 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:44.524 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:44.524 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:44.524 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:44.781 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:44.781 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:44.781 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:45.040 00:30:36 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.040 00:30:36 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.040 00:30:36 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.040 00:30:36 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.040 00:30:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.040 00:30:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:45.040 00:30:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.977 00:30:38 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:46.977 00:26:46.977 real 1m8.111s 00:26:46.977 user 6m30.830s 00:26:46.977 sys 0m20.035s 00:26:46.977 00:30:38 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:26:46.977 00:30:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:46.977 ************************************ 00:26:46.977 END TEST nvmf_dif 00:26:46.977 ************************************ 00:26:46.977 00:30:38 -- common/autotest_common.sh@1142 -- # return 0 00:26:46.977 00:30:38 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:46.977 00:30:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:46.977 00:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.977 00:30:38 -- common/autotest_common.sh@10 -- # set +x 00:26:46.977 ************************************ 00:26:46.977 START TEST nvmf_abort_qd_sizes 00:26:46.977 ************************************ 00:26:46.977 00:30:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:46.977 * Looking for test storage... 00:26:47.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.236 00:30:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.236 00:30:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:49.135 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:49.135 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.135 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:49.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:49.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
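In outline, the gather_supported_nvmf_pci_devs pass above whitelists known NIC device IDs (Intel E810 0x1592/0x159b and X722 0x37d2, plus the Mellanox ConnectX family), then resolves each surviving PCI function to its kernel net device through sysfs, which is where the two "Found net devices under 0000:0a:00.x: cvl_0_x" lines come from. A rough standalone sketch of the same idea, not the actual nvmf/common.sh code (the lspci usage is illustrative):

    # List E810 functions (0x159b is the ID seen in this run) and the net
    # devices sitting behind each PCI address, as the trace above does.
    for id in 1592 159b; do
        for pci in $(lspci -D -d "8086:${id}" | awk '{print $1}'); do
            for netdir in "/sys/bus/pci/devices/${pci}/net/"*; do
                [[ -e $netdir ]] && echo "Found net devices under ${pci}: ${netdir##*/}"
            done
        done
    done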
00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:26:49.136 00:26:49.136 --- 10.0.0.2 ping statistics --- 00:26:49.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.136 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
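nvmf_tcp_init above turns the two-port E810 card into a self-contained test topology: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) to act as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and the two pings (the second one continues below) prove the physical loopback path in both directions. Condensed from the trace, with error handling omitted:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # root ns -> target port
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> initiator port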
00:26:49.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:26:49.136 00:26:49.136 --- 10.0.0.1 ping statistics --- 00:26:49.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.136 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:49.136 00:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:50.069 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:50.327 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:50.327 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:50.327 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:50.327 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:50.327 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:50.327 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:50.327 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:50.327 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:50.327 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:50.327 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:50.327 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:50.327 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:50.327 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:50.327 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:50.327 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:51.263 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:51.263 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.263 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:51.263 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:51.263 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.263 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:51.263 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2411527 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2411527 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2411527 ']' 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
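With the namespace wired up and setup.sh having rebound the ioatdma and NVMe devices to vfio-pci, nvmfappstart launches the SPDK target inside the target namespace and waitforlisten (whose output continues below) blocks until the app's RPC socket answers. A minimal sketch of that pattern; the polling loop is illustrative, not the real waitforlisten from autotest_common.sh, which retries far longer and distinguishes more failure modes:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # Do not issue RPCs until the UNIX-domain socket actually accepts them.
    for _ in $(seq 1 100); do
        if [[ -S /var/tmp/spdk.sock ]] && ./scripts/rpc.py rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done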
00:26:51.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.521 00:30:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 [2024-07-16 00:30:42.948079] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:26:51.522 [2024-07-16 00:30:42.948168] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.522 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.522 [2024-07-16 00:30:43.016705] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.522 [2024-07-16 00:30:43.134382] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.522 [2024-07-16 00:30:43.134449] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.522 [2024-07-16 00:30:43.134465] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.522 [2024-07-16 00:30:43.134478] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.522 [2024-07-16 00:30:43.134490] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.522 [2024-07-16 00:30:43.134577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.522 [2024-07-16 00:30:43.134645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.522 [2024-07-16 00:30:43.134746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.522 [2024-07-16 00:30:43.134749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:26:52.454 00:30:43 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.454 00:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:52.454 ************************************ 00:26:52.454 START TEST spdk_target_abort 00:26:52.454 ************************************ 00:26:52.454 00:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:52.454 00:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:52.454 00:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:52.454 00:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.454 00:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:55.748 spdk_targetn1 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:55.748 [2024-07-16 00:30:46.778466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:55.748 [2024-07-16 00:30:46.810736] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:55.748 00:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:55.748 EAL: No free 2048 kB hugepages 
reported on node 1 00:26:59.049 Initializing NVMe Controllers 00:26:59.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:59.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:59.049 Initialization complete. Launching workers. 00:26:59.049 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9659, failed: 0 00:26:59.049 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1275, failed to submit 8384 00:26:59.049 success 817, unsuccess 458, failed 0 00:26:59.049 00:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:59.049 00:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:59.049 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.325 Initializing NVMe Controllers 00:27:02.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:02.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:02.326 Initialization complete. Launching workers. 00:27:02.326 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8829, failed: 0 00:27:02.326 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 7586 00:27:02.326 success 332, unsuccess 911, failed 0 00:27:02.326 00:30:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:02.326 00:30:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:02.326 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.850 Initializing NVMe Controllers 00:27:04.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:04.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:04.850 Initialization complete. Launching workers. 
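For reference, the spdk_target subsystem these abort runs hammer was assembled a few steps earlier purely over JSON-RPC (the rpc_cmd calls in the trace). The same sequence issued with rpc.py directly would look roughly like this, with every value taken from the log:

    rpc=./scripts/rpc.py
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target   # yields bdev spdk_targetn1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420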
00:27:04.850 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30453, failed: 0 00:27:04.850 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2709, failed to submit 27744 00:27:04.850 success 490, unsuccess 2219, failed 0 00:27:04.850 00:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:04.850 00:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.850 00:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:04.850 00:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.850 00:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:04.850 00:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.850 00:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2411527 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2411527 ']' 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2411527 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2411527 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2411527' 00:27:06.219 killing process with pid 2411527 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2411527 00:27:06.219 00:30:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2411527 00:27:06.478 00:27:06.478 real 0m14.144s 00:27:06.478 user 0m56.020s 00:27:06.478 sys 0m2.510s 00:27:06.478 00:30:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:06.478 00:30:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.478 ************************************ 00:27:06.478 END TEST spdk_target_abort 00:27:06.478 ************************************ 00:27:06.737 00:30:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:06.737 00:30:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:06.737 00:30:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:06.737 00:30:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.737 00:30:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:06.737 
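Reading the three result blocks above: at each queue depth (4, 24, 64) the abort example reports how many I/Os completed, how many abort commands it queued versus dropped once the per-queue abort limit was hit ("abort submitted ... failed to submit ..."), and finally "success N, unsuccess M, failed 0". Roughly, "success" means the controller completed the abort, "unsuccess" means the abort came back without cancelling the I/O (typically because it had already completed), and only "failed" indicates hard errors; the example's exact exit-code behavior is not visible in this log, so a conservative wrapper would check the summary line itself:

    out=$(./build/examples/abort -q 64 -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn')
    # The per-run summary ends in 'failed 0' when no abort hit a hard error.
    echo "$out" | grep -q ' failed 0$' || echo 'abort run reported hard failures' >&2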
************************************ 00:27:06.737 START TEST kernel_target_abort 00:27:06.737 ************************************ 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:06.737 00:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:07.674 Waiting for block devices as requested 00:27:07.674 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:07.933 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:07.933 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:08.191 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:08.191 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:08.191 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:08.191 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:08.450 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:08.450 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:08.450 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:08.450 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:08.709 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:08.709 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:08.709 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:08.967 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:08.967 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:08.967 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:09.225 No valid GPT data, bailing 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:09.225 00:31:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:09.225 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:09.226 00:27:09.226 Discovery Log Number of Records 2, Generation counter 2 00:27:09.226 =====Discovery Log Entry 0====== 00:27:09.226 trtype: tcp 00:27:09.226 adrfam: ipv4 00:27:09.226 subtype: current discovery subsystem 00:27:09.226 treq: not specified, sq flow control disable supported 00:27:09.226 portid: 1 00:27:09.226 trsvcid: 4420 00:27:09.226 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:09.226 traddr: 10.0.0.1 00:27:09.226 eflags: none 00:27:09.226 sectype: none 00:27:09.226 =====Discovery Log Entry 1====== 00:27:09.226 trtype: tcp 00:27:09.226 adrfam: ipv4 00:27:09.226 subtype: nvme subsystem 00:27:09.226 treq: not specified, sq flow control disable supported 00:27:09.226 portid: 1 00:27:09.226 trsvcid: 4420 00:27:09.226 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:09.226 traddr: 10.0.0.1 00:27:09.226 eflags: none 00:27:09.226 sectype: none 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:09.226 00:31:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:09.226 00:31:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:09.226 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.501 Initializing NVMe Controllers 00:27:12.501 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:12.501 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:12.501 Initialization complete. Launching workers. 00:27:12.501 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29564, failed: 0 00:27:12.501 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29564, failed to submit 0 00:27:12.501 success 0, unsuccess 29564, failed 0 00:27:12.501 00:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:12.501 00:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.501 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.805 Initializing NVMe Controllers 00:27:15.805 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:15.805 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:15.805 Initialization complete. Launching workers. 
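The kernel-target setup traced before these runs (configure_kernel_target) is plain nvmet configfs plumbing; stripped of the xtrace noise it amounts to the following, where the attribute file names follow the stock Linux nvmet layout and are what the bare "echo" lines above are writing into (attr_model as the destination of the SPDK-... string is an inference from that layout, not shown verbatim in the trace):

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"
    echo 1             > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"
    echo 1             > "$sub/namespaces/1/enable"
    echo 10.0.0.1      > "$port/addr_traddr"
    echo tcp           > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"    # expose the subsystem on the port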
00:27:15.805 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57037, failed: 0 00:27:15.805 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14350, failed to submit 42687 00:27:15.805 success 0, unsuccess 14350, failed 0 00:27:15.805 00:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:15.805 00:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:15.805 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.082 Initializing NVMe Controllers 00:27:19.082 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:19.082 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:19.082 Initialization complete. Launching workers. 00:27:19.082 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55791, failed: 0 00:27:19.082 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13922, failed to submit 41869 00:27:19.082 success 0, unsuccess 13922, failed 0 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:19.082 00:31:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:19.647 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:19.647 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:19.647 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:19.647 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:19.647 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:19.918 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:19.918 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:19.918 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:19.918 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:19.918 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:19.918 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:19.918 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:19.918 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:19.918 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:19.918 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:19.918 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:20.854 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:20.854 00:27:20.854 real 0m14.246s 00:27:20.854 user 0m4.670s 00:27:20.854 sys 0m3.338s 00:27:20.854 00:31:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:20.854 00:31:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.854 ************************************ 00:27:20.854 END TEST kernel_target_abort 00:27:20.854 ************************************ 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:20.854 rmmod nvme_tcp 00:27:20.854 rmmod nvme_fabrics 00:27:20.854 rmmod nvme_keyring 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2411527 ']' 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2411527 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2411527 ']' 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2411527 00:27:20.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2411527) - No such process 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2411527 is not found' 00:27:20.854 Process with pid 2411527 is not found 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:20.854 00:31:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:22.227 Waiting for block devices as requested 00:27:22.227 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:22.227 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:22.227 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:22.227 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:22.485 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:22.485 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:22.485 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:22.485 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:22.743 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:22.743 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:22.743 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:22.743 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:23.001 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:23.001 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:27:23.001 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:23.001 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:23.258 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:23.258 00:31:14 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.258 00:31:14 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.258 00:31:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.258 00:31:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.258 00:31:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.258 00:31:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:23.258 00:31:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.791 00:31:16 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:25.791 00:27:25.791 real 0m38.294s 00:27:25.791 user 1m2.952s 00:27:25.791 sys 0m9.073s 00:27:25.791 00:31:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.791 00:31:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:25.791 ************************************ 00:27:25.791 END TEST nvmf_abort_qd_sizes 00:27:25.791 ************************************ 00:27:25.791 00:31:16 -- common/autotest_common.sh@1142 -- # return 0 00:27:25.791 00:31:16 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:25.791 00:31:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:25.791 00:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.791 00:31:16 -- common/autotest_common.sh@10 -- # set +x 00:27:25.791 ************************************ 00:27:25.791 START TEST keyring_file 00:27:25.791 ************************************ 00:27:25.791 00:31:16 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:25.791 * Looking for test storage... 
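Teardown above mirrors setup in reverse: clean_kernel_target disables and unlinks the nvmet namespace and port before removing the configfs directories and unloading nvmet_tcp/nvmet, then nvmftestfini unloads the initiator-side modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines), setup.sh reset restores the driver bindings, and the network namespace is flushed away. Condensed, with the same paths as before:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    echo 0 > "$sub/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet
    modprobe -r nvme-tcp nvme-fabrics    # initiator side; nvme_keyring goes with them
    ip netns delete cvl_0_0_ns_spdk      # what remove_spdk_ns boils down to here
    ip -4 addr flush cvl_0_1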
00:27:25.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:25.791 00:31:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.791 00:31:16 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.791 00:31:16 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.791 00:31:16 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.791 00:31:16 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.791 00:31:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.791 00:31:16 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.791 00:31:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:25.791 00:31:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:25.791 00:31:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:25.791 00:31:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:25.791 00:31:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:25.791 00:31:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:25.791 00:31:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:25.791 00:31:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Hd6izPLolG 00:27:25.791 00:31:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:25.791 00:31:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:25.792 00:31:16 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Hd6izPLolG 00:27:25.792 00:31:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Hd6izPLolG 00:27:25.792 00:31:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Hd6izPLolG 00:27:25.792 00:31:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:25.792 00:31:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:25.792 00:31:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:25.792 00:31:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:25.792 00:31:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:25.792 00:31:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:25.792 00:31:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OZlqVk89Hk 00:27:25.792 00:31:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:25.792 00:31:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:25.792 00:31:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OZlqVk89Hk 00:27:25.792 00:31:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OZlqVk89Hk 00:27:25.792 00:31:17 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.OZlqVk89Hk 00:27:25.792 00:31:17 keyring_file -- keyring/file.sh@30 -- # tgtpid=2417419 00:27:25.792 00:31:17 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:25.792 00:31:17 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2417419 00:27:25.792 00:31:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2417419 ']' 00:27:25.792 00:31:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.792 00:31:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:25.792 00:31:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.792 00:31:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:25.792 00:31:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:25.792 [2024-07-16 00:31:17.082331] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 
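prep_key in the keyring test above turns a raw hex key into an NVMe TLS PSK interchange file: mktemp a path, wrap the key as NVMeTLSkey-1:<digest>:<base64 payload>:, and chmod it to 0600 so it is accepted as a key file. A minimal sketch of the digest-0 case; the assumption (the inline "python -" body is not shown in the trace) is that the payload is the raw key followed by its little-endian CRC32, per the interchange-format convention:

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)    # e.g. /tmp/tmp.Hd6izPLolG in this run
    python3 -c 'import base64,binascii,struct,sys; raw=bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + struct.pack("<I", binascii.crc32(raw))).decode())' "$key" > "$path"
    chmod 0600 "$path"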
00:27:25.792 [2024-07-16 00:31:17.082410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417419 ] 00:27:25.792 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.792 [2024-07-16 00:31:17.142819] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.792 [2024-07-16 00:31:17.247805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:26.050 00:31:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:26.050 [2024-07-16 00:31:17.498918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.050 null0 00:27:26.050 [2024-07-16 00:31:17.530981] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:26.050 [2024-07-16 00:31:17.531473] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:26.050 [2024-07-16 00:31:17.538979] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.050 00:31:17 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.050 00:31:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:26.051 [2024-07-16 00:31:17.546981] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:26.051 request: 00:27:26.051 { 00:27:26.051 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:26.051 "secure_channel": false, 00:27:26.051 "listen_address": { 00:27:26.051 "trtype": "tcp", 00:27:26.051 "traddr": "127.0.0.1", 00:27:26.051 "trsvcid": "4420" 00:27:26.051 }, 00:27:26.051 "method": "nvmf_subsystem_add_listener", 00:27:26.051 "req_id": 1 00:27:26.051 } 00:27:26.051 Got JSON-RPC error response 00:27:26.051 response: 00:27:26.051 { 00:27:26.051 "code": -32602, 00:27:26.051 "message": "Invalid parameters" 00:27:26.051 } 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 
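The -32602 response above is the expected outcome, not a failure of the run: the target already listens on 127.0.0.1:4420, and keyring/file.sh wraps a second, identical nvmf_subsystem_add_listener call in the NOT helper, so the test passes only when the RPC is rejected (es=1 records the non-zero exit status). Stripped of the wrappers, the assertion reduces to the pattern below, a sketch assuming the spdk_tgt from this run is still up on the default /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Re-registering an identical listener must fail with JSON-RPC -32602.
    if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
            nqn.2016-06.io.spdk:cnode0; then
        echo "duplicate listener unexpectedly accepted" >&2
        exit 1
    fi
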
00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:26.051 00:31:17 keyring_file -- keyring/file.sh@46 -- # bperfpid=2417427 00:27:26.051 00:31:17 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:26.051 00:31:17 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2417427 /var/tmp/bperf.sock 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2417427 ']' 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:26.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:26.051 00:31:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:26.051 [2024-07-16 00:31:17.596571] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:27:26.051 [2024-07-16 00:31:17.596655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417427 ] 00:27:26.051 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.051 [2024-07-16 00:31:17.651831] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.309 [2024-07-16 00:31:17.758832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.309 00:31:17 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.309 00:31:17 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:26.309 00:31:17 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hd6izPLolG 00:27:26.309 00:31:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Hd6izPLolG 00:27:26.566 00:31:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OZlqVk89Hk 00:27:26.566 00:31:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OZlqVk89Hk 00:27:26.823 00:31:18 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:26.823 00:31:18 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:26.823 00:31:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:26.823 00:31:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.823 00:31:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:27.080 00:31:18 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Hd6izPLolG == \/\t\m\p\/\t\m\p\.\H\d\6\i\z\P\L\o\l\G ]] 00:27:27.080 00:31:18 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:27.080 00:31:18 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:27.080 00:31:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:27.080 00:31:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:27.080 00:31:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:27.336 00:31:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.OZlqVk89Hk == \/\t\m\p\/\t\m\p\.\O\Z\l\q\V\k\8\9\H\k ]] 00:27:27.336 00:31:18 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:27.336 00:31:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:27.336 00:31:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:27.336 00:31:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:27.336 00:31:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:27.336 00:31:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:27.593 00:31:19 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:27.593 00:31:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:27.593 00:31:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:27.593 00:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:27.593 00:31:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:27.593 00:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:27.593 00:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:27.851 00:31:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:27.851 00:31:19 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:27.851 00:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:28.108 [2024-07-16 00:31:19.592943] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:28.108 nvme0n1 00:27:28.108 00:31:19 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:28.108 00:31:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:28.108 00:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:28.108 00:31:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:28.108 00:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:28.108 00:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:28.365 00:31:19 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:28.365 00:31:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:28.365 00:31:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:28.365 00:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:28.365 00:31:19 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:28.365 00:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:28.366 00:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:28.623 00:31:20 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:28.624 00:31:20 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:28.881 Running I/O for 1 seconds... 00:27:29.815 00:27:29.815 Latency(us) 00:27:29.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.815 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:29.815 nvme0n1 : 1.02 4831.62 18.87 0.00 0.00 26193.15 4126.34 35729.26 00:27:29.815 =================================================================================================================== 00:27:29.815 Total : 4831.62 18.87 0.00 0.00 26193.15 4126.34 35729.26 00:27:29.815 0 00:27:29.815 00:31:21 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:29.815 00:31:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:30.073 00:31:21 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:30.073 00:31:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:30.073 00:31:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.073 00:31:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.073 00:31:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.073 00:31:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.332 00:31:21 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:30.332 00:31:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:30.332 00:31:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:30.332 00:31:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.332 00:31:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.332 00:31:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.332 00:31:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:30.590 00:31:22 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:30.590 00:31:22 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:30.590 00:31:22 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:30.590 00:31:22 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:30.590 00:31:22 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:30.590 00:31:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:30.590 00:31:22 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:30.590 00:31:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:30.590 00:31:22 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:30.590 00:31:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:30.848 [2024-07-16 00:31:22.314695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:30.848 [2024-07-16 00:31:22.315176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96b430 (107): Transport endpoint is not connected 00:27:30.848 [2024-07-16 00:31:22.316167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96b430 (9): Bad file descriptor 00:27:30.848 [2024-07-16 00:31:22.317165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:30.848 [2024-07-16 00:31:22.317199] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:30.848 [2024-07-16 00:31:22.317213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:30.848 request: 00:27:30.848 { 00:27:30.848 "name": "nvme0", 00:27:30.848 "trtype": "tcp", 00:27:30.848 "traddr": "127.0.0.1", 00:27:30.848 "adrfam": "ipv4", 00:27:30.848 "trsvcid": "4420", 00:27:30.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:30.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:30.848 "prchk_reftag": false, 00:27:30.848 "prchk_guard": false, 00:27:30.848 "hdgst": false, 00:27:30.848 "ddgst": false, 00:27:30.848 "psk": "key1", 00:27:30.848 "method": "bdev_nvme_attach_controller", 00:27:30.848 "req_id": 1 00:27:30.848 } 00:27:30.848 Got JSON-RPC error response 00:27:30.848 response: 00:27:30.848 { 00:27:30.848 "code": -5, 00:27:30.848 "message": "Input/output error" 00:27:30.848 } 00:27:30.848 00:31:22 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:30.848 00:31:22 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:30.848 00:31:22 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:30.848 00:31:22 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:30.848 00:31:22 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:30.848 00:31:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:30.848 00:31:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.848 00:31:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.848 00:31:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.848 00:31:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.107 00:31:22 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:31.107 00:31:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:31.107 00:31:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:31.107 00:31:22 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.107 00:31:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.107 00:31:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.107 00:31:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:31.397 00:31:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:31.397 00:31:22 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:31.397 00:31:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:31.654 00:31:23 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:31.654 00:31:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:31.912 00:31:23 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:31.912 00:31:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.912 00:31:23 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:32.169 00:31:23 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:32.169 00:31:23 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Hd6izPLolG 00:27:32.169 00:31:23 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hd6izPLolG 00:27:32.169 00:31:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:32.169 00:31:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hd6izPLolG 00:27:32.169 00:31:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:32.169 00:31:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:32.169 00:31:23 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:32.169 00:31:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:32.170 00:31:23 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hd6izPLolG 00:27:32.170 00:31:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Hd6izPLolG 00:27:32.427 [2024-07-16 00:31:23.822697] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Hd6izPLolG': 0100660 00:27:32.427 [2024-07-16 00:31:23.822737] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:32.427 request: 00:27:32.427 { 00:27:32.427 "name": "key0", 00:27:32.427 "path": "/tmp/tmp.Hd6izPLolG", 00:27:32.427 "method": "keyring_file_add_key", 00:27:32.427 "req_id": 1 00:27:32.427 } 00:27:32.427 Got JSON-RPC error response 00:27:32.427 response: 00:27:32.427 { 00:27:32.427 "code": -1, 00:27:32.427 "message": "Operation not permitted" 00:27:32.427 } 00:27:32.427 00:31:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:32.427 00:31:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:32.427 00:31:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:32.427 00:31:23 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:32.427 00:31:23 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Hd6izPLolG 00:27:32.427 00:31:23 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hd6izPLolG 00:27:32.427 00:31:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Hd6izPLolG 00:27:32.685 00:31:24 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Hd6izPLolG 00:27:32.685 00:31:24 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:32.685 00:31:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:32.685 00:31:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:32.685 00:31:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:32.685 00:31:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:32.685 00:31:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.943 00:31:24 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:32.943 00:31:24 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:32.943 00:31:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:32.943 00:31:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:32.943 00:31:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:32.943 00:31:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:32.943 00:31:24 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:32.943 00:31:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:32.943 00:31:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:32.943 00:31:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:33.200 [2024-07-16 00:31:24.560707] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Hd6izPLolG': No such file or directory 00:27:33.200 [2024-07-16 00:31:24.560743] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:33.200 [2024-07-16 00:31:24.560775] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:33.200 [2024-07-16 00:31:24.560790] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:33.200 [2024-07-16 00:31:24.560814] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:33.200 request: 00:27:33.200 { 00:27:33.200 "name": "nvme0", 00:27:33.200 "trtype": "tcp", 00:27:33.200 "traddr": "127.0.0.1", 00:27:33.200 "adrfam": "ipv4", 00:27:33.200 
"trsvcid": "4420", 00:27:33.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.200 "prchk_reftag": false, 00:27:33.200 "prchk_guard": false, 00:27:33.200 "hdgst": false, 00:27:33.200 "ddgst": false, 00:27:33.200 "psk": "key0", 00:27:33.200 "method": "bdev_nvme_attach_controller", 00:27:33.200 "req_id": 1 00:27:33.200 } 00:27:33.200 Got JSON-RPC error response 00:27:33.200 response: 00:27:33.200 { 00:27:33.200 "code": -19, 00:27:33.200 "message": "No such device" 00:27:33.200 } 00:27:33.200 00:31:24 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:33.200 00:31:24 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:33.200 00:31:24 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:33.200 00:31:24 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:33.200 00:31:24 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:33.200 00:31:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:33.458 00:31:24 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t46qbzFmAk 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:33.458 00:31:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:33.458 00:31:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.458 00:31:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:33.458 00:31:24 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:33.458 00:31:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:33.458 00:31:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t46qbzFmAk 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t46qbzFmAk 00:27:33.458 00:31:24 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.t46qbzFmAk 00:27:33.458 00:31:24 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t46qbzFmAk 00:27:33.458 00:31:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t46qbzFmAk 00:27:33.715 00:31:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:33.715 00:31:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:33.973 nvme0n1 00:27:33.973 
00:31:25 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:33.973 00:31:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:33.973 00:31:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.973 00:31:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.973 00:31:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.973 00:31:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.229 00:31:25 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:34.229 00:31:25 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:34.229 00:31:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:34.486 00:31:25 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:34.486 00:31:25 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:34.486 00:31:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.486 00:31:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.486 00:31:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.744 00:31:26 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:34.744 00:31:26 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:34.744 00:31:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:34.744 00:31:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.744 00:31:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.744 00:31:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.744 00:31:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:35.001 00:31:26 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:35.001 00:31:26 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:35.001 00:31:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:35.258 00:31:26 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:35.258 00:31:26 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:35.258 00:31:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.515 00:31:26 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:35.515 00:31:26 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t46qbzFmAk 00:27:35.515 00:31:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t46qbzFmAk 00:27:35.772 00:31:27 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OZlqVk89Hk 00:27:35.772 00:31:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OZlqVk89Hk 00:27:36.029 00:31:27 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.029 00:31:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.287 nvme0n1 00:27:36.287 00:31:27 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:36.287 00:31:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:36.544 00:31:28 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:36.544 "subsystems": [ 00:27:36.544 { 00:27:36.544 "subsystem": "keyring", 00:27:36.544 "config": [ 00:27:36.544 { 00:27:36.544 "method": "keyring_file_add_key", 00:27:36.544 "params": { 00:27:36.544 "name": "key0", 00:27:36.544 "path": "/tmp/tmp.t46qbzFmAk" 00:27:36.544 } 00:27:36.544 }, 00:27:36.544 { 00:27:36.544 "method": "keyring_file_add_key", 00:27:36.544 "params": { 00:27:36.544 "name": "key1", 00:27:36.544 "path": "/tmp/tmp.OZlqVk89Hk" 00:27:36.544 } 00:27:36.544 } 00:27:36.544 ] 00:27:36.544 }, 00:27:36.544 { 00:27:36.544 "subsystem": "iobuf", 00:27:36.544 "config": [ 00:27:36.544 { 00:27:36.544 "method": "iobuf_set_options", 00:27:36.544 "params": { 00:27:36.544 "small_pool_count": 8192, 00:27:36.544 "large_pool_count": 1024, 00:27:36.544 "small_bufsize": 8192, 00:27:36.544 "large_bufsize": 135168 00:27:36.544 } 00:27:36.544 } 00:27:36.544 ] 00:27:36.544 }, 00:27:36.544 { 00:27:36.544 "subsystem": "sock", 00:27:36.544 "config": [ 00:27:36.544 { 00:27:36.544 "method": "sock_set_default_impl", 00:27:36.544 "params": { 00:27:36.544 "impl_name": "posix" 00:27:36.544 } 00:27:36.544 }, 00:27:36.544 { 00:27:36.544 "method": "sock_impl_set_options", 00:27:36.544 "params": { 00:27:36.544 "impl_name": "ssl", 00:27:36.544 "recv_buf_size": 4096, 00:27:36.544 "send_buf_size": 4096, 00:27:36.544 "enable_recv_pipe": true, 00:27:36.544 "enable_quickack": false, 00:27:36.544 "enable_placement_id": 0, 00:27:36.544 "enable_zerocopy_send_server": true, 00:27:36.544 "enable_zerocopy_send_client": false, 00:27:36.544 "zerocopy_threshold": 0, 00:27:36.544 "tls_version": 0, 00:27:36.544 "enable_ktls": false 00:27:36.544 } 00:27:36.544 }, 00:27:36.544 { 00:27:36.544 "method": "sock_impl_set_options", 00:27:36.544 "params": { 00:27:36.544 "impl_name": "posix", 00:27:36.544 "recv_buf_size": 2097152, 00:27:36.544 "send_buf_size": 2097152, 00:27:36.544 "enable_recv_pipe": true, 00:27:36.544 "enable_quickack": false, 00:27:36.544 "enable_placement_id": 0, 00:27:36.544 "enable_zerocopy_send_server": true, 00:27:36.544 "enable_zerocopy_send_client": false, 00:27:36.544 "zerocopy_threshold": 0, 00:27:36.544 "tls_version": 0, 00:27:36.544 "enable_ktls": false 00:27:36.544 } 00:27:36.544 } 00:27:36.544 ] 00:27:36.544 }, 00:27:36.544 { 00:27:36.544 "subsystem": "vmd", 00:27:36.544 "config": [] 00:27:36.544 }, 00:27:36.544 { 00:27:36.544 "subsystem": "accel", 00:27:36.544 "config": [ 00:27:36.544 { 00:27:36.544 "method": "accel_set_options", 00:27:36.544 "params": { 00:27:36.544 "small_cache_size": 128, 00:27:36.545 "large_cache_size": 16, 00:27:36.545 "task_count": 2048, 00:27:36.545 "sequence_count": 2048, 00:27:36.545 "buf_count": 2048 00:27:36.545 } 00:27:36.545 } 00:27:36.545 ] 00:27:36.545 
}, 00:27:36.545 { 00:27:36.545 "subsystem": "bdev", 00:27:36.545 "config": [ 00:27:36.545 { 00:27:36.545 "method": "bdev_set_options", 00:27:36.545 "params": { 00:27:36.545 "bdev_io_pool_size": 65535, 00:27:36.545 "bdev_io_cache_size": 256, 00:27:36.545 "bdev_auto_examine": true, 00:27:36.545 "iobuf_small_cache_size": 128, 00:27:36.545 "iobuf_large_cache_size": 16 00:27:36.545 } 00:27:36.545 }, 00:27:36.545 { 00:27:36.545 "method": "bdev_raid_set_options", 00:27:36.545 "params": { 00:27:36.545 "process_window_size_kb": 1024 00:27:36.545 } 00:27:36.545 }, 00:27:36.545 { 00:27:36.545 "method": "bdev_iscsi_set_options", 00:27:36.545 "params": { 00:27:36.545 "timeout_sec": 30 00:27:36.545 } 00:27:36.545 }, 00:27:36.545 { 00:27:36.545 "method": "bdev_nvme_set_options", 00:27:36.545 "params": { 00:27:36.545 "action_on_timeout": "none", 00:27:36.545 "timeout_us": 0, 00:27:36.545 "timeout_admin_us": 0, 00:27:36.545 "keep_alive_timeout_ms": 10000, 00:27:36.545 "arbitration_burst": 0, 00:27:36.545 "low_priority_weight": 0, 00:27:36.545 "medium_priority_weight": 0, 00:27:36.545 "high_priority_weight": 0, 00:27:36.545 "nvme_adminq_poll_period_us": 10000, 00:27:36.545 "nvme_ioq_poll_period_us": 0, 00:27:36.545 "io_queue_requests": 512, 00:27:36.545 "delay_cmd_submit": true, 00:27:36.545 "transport_retry_count": 4, 00:27:36.545 "bdev_retry_count": 3, 00:27:36.545 "transport_ack_timeout": 0, 00:27:36.545 "ctrlr_loss_timeout_sec": 0, 00:27:36.545 "reconnect_delay_sec": 0, 00:27:36.545 "fast_io_fail_timeout_sec": 0, 00:27:36.545 "disable_auto_failback": false, 00:27:36.545 "generate_uuids": false, 00:27:36.545 "transport_tos": 0, 00:27:36.545 "nvme_error_stat": false, 00:27:36.545 "rdma_srq_size": 0, 00:27:36.545 "io_path_stat": false, 00:27:36.545 "allow_accel_sequence": false, 00:27:36.545 "rdma_max_cq_size": 0, 00:27:36.545 "rdma_cm_event_timeout_ms": 0, 00:27:36.545 "dhchap_digests": [ 00:27:36.545 "sha256", 00:27:36.545 "sha384", 00:27:36.545 "sha512" 00:27:36.545 ], 00:27:36.545 "dhchap_dhgroups": [ 00:27:36.545 "null", 00:27:36.545 "ffdhe2048", 00:27:36.545 "ffdhe3072", 00:27:36.545 "ffdhe4096", 00:27:36.545 "ffdhe6144", 00:27:36.545 "ffdhe8192" 00:27:36.545 ] 00:27:36.545 } 00:27:36.545 }, 00:27:36.545 { 00:27:36.545 "method": "bdev_nvme_attach_controller", 00:27:36.545 "params": { 00:27:36.545 "name": "nvme0", 00:27:36.545 "trtype": "TCP", 00:27:36.545 "adrfam": "IPv4", 00:27:36.545 "traddr": "127.0.0.1", 00:27:36.545 "trsvcid": "4420", 00:27:36.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.545 "prchk_reftag": false, 00:27:36.545 "prchk_guard": false, 00:27:36.545 "ctrlr_loss_timeout_sec": 0, 00:27:36.545 "reconnect_delay_sec": 0, 00:27:36.545 "fast_io_fail_timeout_sec": 0, 00:27:36.545 "psk": "key0", 00:27:36.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.545 "hdgst": false, 00:27:36.545 "ddgst": false 00:27:36.545 } 00:27:36.545 }, 00:27:36.545 { 00:27:36.545 "method": "bdev_nvme_set_hotplug", 00:27:36.545 "params": { 00:27:36.545 "period_us": 100000, 00:27:36.545 "enable": false 00:27:36.545 } 00:27:36.545 }, 00:27:36.545 { 00:27:36.545 "method": "bdev_wait_for_examine" 00:27:36.545 } 00:27:36.545 ] 00:27:36.545 }, 00:27:36.545 { 00:27:36.545 "subsystem": "nbd", 00:27:36.545 "config": [] 00:27:36.545 } 00:27:36.545 ] 00:27:36.545 }' 00:27:36.545 00:31:28 keyring_file -- keyring/file.sh@114 -- # killprocess 2417427 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2417427 ']' 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2417427 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2417427 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2417427' 00:27:36.545 killing process with pid 2417427 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@967 -- # kill 2417427 00:27:36.545 Received shutdown signal, test time was about 1.000000 seconds 00:27:36.545 00:27:36.545 Latency(us) 00:27:36.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.545 =================================================================================================================== 00:27:36.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.545 00:31:28 keyring_file -- common/autotest_common.sh@972 -- # wait 2417427 00:27:36.803 00:31:28 keyring_file -- keyring/file.sh@117 -- # bperfpid=2418885 00:27:36.803 00:31:28 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2418885 /var/tmp/bperf.sock 00:27:36.803 00:31:28 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2418885 ']' 00:27:36.803 00:31:28 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:36.803 00:31:28 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:36.803 00:31:28 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.803 00:31:28 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:36.803 "subsystems": [ 00:27:36.803 { 00:27:36.803 "subsystem": "keyring", 00:27:36.803 "config": [ 00:27:36.803 { 00:27:36.803 "method": "keyring_file_add_key", 00:27:36.803 "params": { 00:27:36.803 "name": "key0", 00:27:36.803 "path": "/tmp/tmp.t46qbzFmAk" 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "keyring_file_add_key", 00:27:36.803 "params": { 00:27:36.803 "name": "key1", 00:27:36.803 "path": "/tmp/tmp.OZlqVk89Hk" 00:27:36.803 } 00:27:36.803 } 00:27:36.803 ] 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "subsystem": "iobuf", 00:27:36.803 "config": [ 00:27:36.803 { 00:27:36.803 "method": "iobuf_set_options", 00:27:36.803 "params": { 00:27:36.803 "small_pool_count": 8192, 00:27:36.803 "large_pool_count": 1024, 00:27:36.803 "small_bufsize": 8192, 00:27:36.803 "large_bufsize": 135168 00:27:36.803 } 00:27:36.803 } 00:27:36.803 ] 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "subsystem": "sock", 00:27:36.803 "config": [ 00:27:36.803 { 00:27:36.803 "method": "sock_set_default_impl", 00:27:36.803 "params": { 00:27:36.803 "impl_name": "posix" 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "sock_impl_set_options", 00:27:36.803 "params": { 00:27:36.803 "impl_name": "ssl", 00:27:36.803 "recv_buf_size": 4096, 00:27:36.803 "send_buf_size": 4096, 00:27:36.803 "enable_recv_pipe": true, 00:27:36.803 "enable_quickack": false, 00:27:36.803 "enable_placement_id": 0, 00:27:36.803 "enable_zerocopy_send_server": true, 00:27:36.803 "enable_zerocopy_send_client": false, 00:27:36.803 "zerocopy_threshold": 0, 00:27:36.803 
"tls_version": 0, 00:27:36.803 "enable_ktls": false 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "sock_impl_set_options", 00:27:36.803 "params": { 00:27:36.803 "impl_name": "posix", 00:27:36.803 "recv_buf_size": 2097152, 00:27:36.803 "send_buf_size": 2097152, 00:27:36.803 "enable_recv_pipe": true, 00:27:36.803 "enable_quickack": false, 00:27:36.803 "enable_placement_id": 0, 00:27:36.803 "enable_zerocopy_send_server": true, 00:27:36.803 "enable_zerocopy_send_client": false, 00:27:36.803 "zerocopy_threshold": 0, 00:27:36.803 "tls_version": 0, 00:27:36.803 "enable_ktls": false 00:27:36.803 } 00:27:36.803 } 00:27:36.803 ] 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "subsystem": "vmd", 00:27:36.803 "config": [] 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "subsystem": "accel", 00:27:36.803 "config": [ 00:27:36.803 { 00:27:36.803 "method": "accel_set_options", 00:27:36.803 "params": { 00:27:36.803 "small_cache_size": 128, 00:27:36.803 "large_cache_size": 16, 00:27:36.803 "task_count": 2048, 00:27:36.803 "sequence_count": 2048, 00:27:36.803 "buf_count": 2048 00:27:36.803 } 00:27:36.803 } 00:27:36.803 ] 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "subsystem": "bdev", 00:27:36.803 "config": [ 00:27:36.803 { 00:27:36.803 "method": "bdev_set_options", 00:27:36.803 "params": { 00:27:36.803 "bdev_io_pool_size": 65535, 00:27:36.803 "bdev_io_cache_size": 256, 00:27:36.803 "bdev_auto_examine": true, 00:27:36.803 "iobuf_small_cache_size": 128, 00:27:36.803 "iobuf_large_cache_size": 16 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "bdev_raid_set_options", 00:27:36.803 "params": { 00:27:36.803 "process_window_size_kb": 1024 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "bdev_iscsi_set_options", 00:27:36.803 "params": { 00:27:36.803 "timeout_sec": 30 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "bdev_nvme_set_options", 00:27:36.803 "params": { 00:27:36.803 "action_on_timeout": "none", 00:27:36.803 "timeout_us": 0, 00:27:36.803 "timeout_admin_us": 0, 00:27:36.803 "keep_alive_timeout_ms": 10000, 00:27:36.803 "arbitration_burst": 0, 00:27:36.803 "low_priority_weight": 0, 00:27:36.803 "medium_priority_weight": 0, 00:27:36.803 "high_priority_weight": 0, 00:27:36.803 "nvme_adminq_poll_period_us": 10000, 00:27:36.803 "nvme_ioq_poll_period_us": 0, 00:27:36.803 "io_queue_requests": 512, 00:27:36.803 "delay_cmd_submit": true, 00:27:36.803 "transport_retry_count": 4, 00:27:36.803 "bdev_retry_count": 3, 00:27:36.803 "transport_ack_timeout": 0, 00:27:36.803 "ctrlr_loss_timeout_sec": 0, 00:27:36.803 "reconnect_delay_sec": 0, 00:27:36.803 "fast_io_fail_timeout_sec": 0, 00:27:36.803 "disable_auto_failback": false, 00:27:36.803 "generate_uuids": false, 00:27:36.803 "transport_tos": 0, 00:27:36.803 "nvme_error_stat": false, 00:27:36.803 "rdma_srq_size": 0, 00:27:36.803 "io_path_stat": false, 00:27:36.803 "allow_accel_sequence": false, 00:27:36.803 "rdma_max_cq_size": 0, 00:27:36.803 "rdma_cm_event_timeout_ms": 0, 00:27:36.803 "dhchap_digests": [ 00:27:36.803 "sha256", 00:27:36.803 "sha384", 00:27:36.803 "sha512" 00:27:36.803 ], 00:27:36.803 "dhchap_dhgroups": [ 00:27:36.803 "null", 00:27:36.803 "ffdhe2048", 00:27:36.803 "ffdhe3072", 00:27:36.803 "ffdhe4096", 00:27:36.803 "ffdhe6144", 00:27:36.803 "ffdhe8192" 00:27:36.803 ] 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "bdev_nvme_attach_controller", 00:27:36.803 "params": { 00:27:36.803 "name": "nvme0", 00:27:36.803 "trtype": "TCP", 00:27:36.803 "adrfam": "IPv4", 
00:27:36.803 "traddr": "127.0.0.1", 00:27:36.803 "trsvcid": "4420", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.803 "prchk_reftag": false, 00:27:36.803 "prchk_guard": false, 00:27:36.803 "ctrlr_loss_timeout_sec": 0, 00:27:36.803 "reconnect_delay_sec": 0, 00:27:36.803 "fast_io_fail_timeout_sec": 0, 00:27:36.803 "psk": "key0", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.803 "hdgst": false, 00:27:36.803 "ddgst": false 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "bdev_nvme_set_hotplug", 00:27:36.803 "params": { 00:27:36.803 "period_us": 100000, 00:27:36.803 "enable": false 00:27:36.803 } 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "method": "bdev_wait_for_examine" 00:27:36.803 } 00:27:36.803 ] 00:27:36.803 }, 00:27:36.803 { 00:27:36.803 "subsystem": "nbd", 00:27:36.803 "config": [] 00:27:36.803 } 00:27:36.803 ] 00:27:36.803 }' 00:27:36.803 00:31:28 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:36.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:36.803 00:31:28 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.804 00:31:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:36.804 [2024-07-16 00:31:28.394608] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:27:36.804 [2024-07-16 00:31:28.394697] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2418885 ] 00:27:37.061 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.061 [2024-07-16 00:31:28.456069] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.061 [2024-07-16 00:31:28.577836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.319 [2024-07-16 00:31:28.762549] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:37.883 00:31:29 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.883 00:31:29 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:37.883 00:31:29 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:37.883 00:31:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.883 00:31:29 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:38.140 00:31:29 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:38.140 00:31:29 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:38.140 00:31:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:38.140 00:31:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:38.140 00:31:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.140 00:31:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.140 00:31:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.398 00:31:29 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:38.398 00:31:29 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:38.398 00:31:29 keyring_file -- keyring/common.sh@12 -- # 
get_key key1 00:27:38.398 00:31:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:38.398 00:31:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.398 00:31:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:38.398 00:31:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.655 00:31:30 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:38.655 00:31:30 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:38.655 00:31:30 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:38.655 00:31:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:38.912 00:31:30 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:38.912 00:31:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:38.912 00:31:30 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.t46qbzFmAk /tmp/tmp.OZlqVk89Hk 00:27:38.912 00:31:30 keyring_file -- keyring/file.sh@20 -- # killprocess 2418885 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2418885 ']' 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2418885 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2418885 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2418885' 00:27:38.912 killing process with pid 2418885 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@967 -- # kill 2418885 00:27:38.912 Received shutdown signal, test time was about 1.000000 seconds 00:27:38.912 00:27:38.912 Latency(us) 00:27:38.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.912 =================================================================================================================== 00:27:38.912 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:38.912 00:31:30 keyring_file -- common/autotest_common.sh@972 -- # wait 2418885 00:27:39.170 00:31:30 keyring_file -- keyring/file.sh@21 -- # killprocess 2417419 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2417419 ']' 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2417419 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2417419 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2417419' 00:27:39.170 killing process with pid 2417419 00:27:39.170 00:31:30 keyring_file -- 
common/autotest_common.sh@967 -- # kill 2417419 00:27:39.170 [2024-07-16 00:31:30.717412] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:39.170 00:31:30 keyring_file -- common/autotest_common.sh@972 -- # wait 2417419 00:27:39.734 00:27:39.734 real 0m14.268s 00:27:39.734 user 0m35.036s 00:27:39.734 sys 0m3.285s 00:27:39.734 00:31:31 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.734 00:31:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 ************************************ 00:27:39.734 END TEST keyring_file 00:27:39.734 ************************************ 00:27:39.734 00:31:31 -- common/autotest_common.sh@1142 -- # return 0 00:27:39.734 00:31:31 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:39.734 00:31:31 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:39.734 00:31:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:39.734 00:31:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.734 00:31:31 -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 ************************************ 00:27:39.734 START TEST keyring_linux 00:27:39.734 ************************************ 00:27:39.734 00:31:31 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:39.734 * Looking for test storage... 00:27:39.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:39.734 00:31:31 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:39.734 00:31:31 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.734 00:31:31 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.734 00:31:31 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.734 00:31:31 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.734 00:31:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.734 00:31:31 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.734 00:31:31 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.734 00:31:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:39.734 00:31:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.734 00:31:31 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.734 00:31:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:39.734 00:31:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:39.734 00:31:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:39.735 00:31:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:39.735 00:31:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:39.735 00:31:31 
keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:39.735 00:31:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:39.735 /tmp/:spdk-test:key0 00:27:39.735 00:31:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:39.735 00:31:31 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:39.735 00:31:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:39.735 /tmp/:spdk-test:key1 00:27:39.735 00:31:31 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2419250 00:27:39.735 00:31:31 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:39.735 00:31:31 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2419250 00:27:39.735 00:31:31 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2419250 ']' 00:27:39.735 00:31:31 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.735 00:31:31 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 
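The prep_key steps above wrap each configured key in the NVMe TLS PSK interchange format before anything touches the kernel keyring. A minimal sketch of that formatting, assuming the layout is base64(key bytes || CRC-32) with a little-endian CRC trailer — the byte order and exact layout are assumptions here, not copied from SPDK's nvmf/common.sh:

key=00112233445566778899aabbccddeeff   # key0 from the records above
digest=0                               # 0 = no hash, as passed to prep_key

python - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is used as the key material
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian CRC-32 trailer
b64 = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{b64}:")
EOF

If those assumptions hold, this prints the same string the test writes to /tmp/:spdk-test:key0 (NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:).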
00:27:39.735 00:31:31 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.735 00:31:31 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.735 00:31:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:39.993 [2024-07-16 00:31:31.375840] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:27:39.993 [2024-07-16 00:31:31.375953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419250 ] 00:27:39.993 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.993 [2024-07-16 00:31:31.433122] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.993 [2024-07-16 00:31:31.541688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:40.251 00:31:31 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:40.251 [2024-07-16 00:31:31.806342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.251 null0 00:27:40.251 [2024-07-16 00:31:31.838367] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:40.251 [2024-07-16 00:31:31.838844] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.251 00:31:31 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:40.251 981893503 00:27:40.251 00:31:31 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:40.251 269044071 00:27:40.251 00:31:31 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2419385 00:27:40.251 00:31:31 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:40.251 00:31:31 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2419385 /var/tmp/bperf.sock 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2419385 ']' 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:40.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
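Both keys now live in the kernel session keyring; the serials printed above (981893503 and 269044071) are what the later check_keys calls compare against. The same keyctl lifecycle as a standalone sketch, mirroring the records above — nothing here is SPDK-specific, it only needs keyutils:

PSK='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

sn=$(keyctl add user :spdk-test:key0 "$PSK" @s)  # add to the session keyring (@s); prints the new serial
keyctl search @s user :spdk-test:key0            # name -> serial lookup; should print "$sn"
keyctl print "$sn"                               # dump the payload; must equal "$PSK"
keyctl unlink "$sn"                              # drop the key, as cleanup does at the end of the test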
00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:40.251 00:31:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:40.509 [2024-07-16 00:31:31.903320] Starting SPDK v24.09-pre git sha1 47ca8c1aa / DPDK 24.03.0 initialization... 00:27:40.509 [2024-07-16 00:31:31.903414] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419385 ] 00:27:40.509 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.509 [2024-07-16 00:31:31.964163] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.509 [2024-07-16 00:31:32.080701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.509 00:31:32 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.509 00:31:32 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:40.509 00:31:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:40.509 00:31:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:40.766 00:31:32 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:40.766 00:31:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:41.332 00:31:32 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:41.332 00:31:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:41.332 [2024-07-16 00:31:32.917499] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:41.590 nvme0n1 00:27:41.590 00:31:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:41.590 00:31:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:41.590 00:31:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:41.590 00:31:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:41.590 00:31:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:41.590 00:31:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.847 00:31:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:41.847 00:31:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:41.847 00:31:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:41.847 00:31:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:41.847 00:31:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.847 00:31:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.847 00:31:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:27:42.103 00:31:33 keyring_linux -- keyring/linux.sh@25 -- # sn=981893503 00:27:42.103 00:31:33 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:42.103 00:31:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:42.103 00:31:33 keyring_linux -- keyring/linux.sh@26 -- # [[ 981893503 == \9\8\1\8\9\3\5\0\3 ]] 00:27:42.104 00:31:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 981893503 00:27:42.104 00:31:33 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:42.104 00:31:33 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.104 Running I/O for 1 seconds... 00:27:43.035 00:27:43.035 Latency(us) 00:27:43.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.035 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:43.035 nvme0n1 : 1.02 3521.86 13.76 0.00 0.00 35944.81 8446.86 46797.56 00:27:43.035 =================================================================================================================== 00:27:43.035 Total : 3521.86 13.76 0.00 0.00 35944.81 8446.86 46797.56 00:27:43.035 0 00:27:43.035 00:31:34 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:43.035 00:31:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:43.293 00:31:34 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:43.293 00:31:34 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:43.293 00:31:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:43.293 00:31:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:43.293 00:31:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:43.293 00:31:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.550 00:31:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:43.550 00:31:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:43.550 00:31:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:43.550 00:31:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:43.550 00:31:35 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:43.550 00:31:35 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:43.550 00:31:35 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:43.550 00:31:35 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.550 00:31:35 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:43.550 00:31:35 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
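The check_keys/get_key helpers exercised above reduce to one comparison: the serial SPDK reports through the keyring_get_keys RPC must match the serial the kernel resolves for the same key name. A condensed sketch using the rpc.py path and bperf socket from the records — the combined jq filter is ours; the log applies the .name selection and the .sn extraction in two separate steps:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

sn_spdk=$($rpc -s /var/tmp/bperf.sock keyring_get_keys \
          | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
sn_kernel=$(keyctl search @s user :spdk-test:key0)

[ "$sn_spdk" = "$sn_kernel" ] || echo "serial mismatch: $sn_spdk vs $sn_kernel" >&2
keyctl print "$sn_kernel"   # payload must still be the interchange PSK string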
00:27:43.550 00:31:35 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:43.550 00:31:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:43.814 [2024-07-16 00:31:35.389633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:43.814 [2024-07-16 00:31:35.390277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b432b0 (107): Transport endpoint is not connected 00:27:43.814 [2024-07-16 00:31:35.391268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b432b0 (9): Bad file descriptor 00:27:43.814 [2024-07-16 00:31:35.392266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:43.814 [2024-07-16 00:31:35.392287] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:43.814 [2024-07-16 00:31:35.392301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:43.814 request: 00:27:43.814 { 00:27:43.814 "name": "nvme0", 00:27:43.814 "trtype": "tcp", 00:27:43.814 "traddr": "127.0.0.1", 00:27:43.814 "adrfam": "ipv4", 00:27:43.814 "trsvcid": "4420", 00:27:43.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.814 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.814 "prchk_reftag": false, 00:27:43.814 "prchk_guard": false, 00:27:43.814 "hdgst": false, 00:27:43.814 "ddgst": false, 00:27:43.814 "psk": ":spdk-test:key1", 00:27:43.814 "method": "bdev_nvme_attach_controller", 00:27:43.814 "req_id": 1 00:27:43.814 } 00:27:43.814 Got JSON-RPC error response 00:27:43.814 response: 00:27:43.814 { 00:27:43.814 "code": -5, 00:27:43.814 "message": "Input/output error" 00:27:43.814 } 00:27:43.814 00:31:35 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:43.814 00:31:35 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:43.814 00:31:35 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:43.814 00:31:35 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@33 -- # sn=981893503 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 981893503 00:27:43.814 1 links removed 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 
00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:43.814 00:31:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:44.124 00:31:35 keyring_linux -- keyring/linux.sh@33 -- # sn=269044071 00:27:44.124 00:31:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 269044071 00:27:44.124 1 links removed 00:27:44.124 00:31:35 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2419385 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2419385 ']' 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2419385 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419385 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419385' 00:27:44.124 killing process with pid 2419385 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@967 -- # kill 2419385 00:27:44.124 Received shutdown signal, test time was about 1.000000 seconds 00:27:44.124 00:27:44.124 Latency(us) 00:27:44.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.124 =================================================================================================================== 00:27:44.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.124 00:31:35 keyring_linux -- common/autotest_common.sh@972 -- # wait 2419385 00:27:44.382 00:31:35 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2419250 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2419250 ']' 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2419250 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419250 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419250' 00:27:44.382 killing process with pid 2419250 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@967 -- # kill 2419250 00:27:44.382 00:31:35 keyring_linux -- common/autotest_common.sh@972 -- # wait 2419250 00:27:44.640 00:27:44.640 real 0m5.042s 00:27:44.641 user 0m9.319s 00:27:44.641 sys 0m1.493s 00:27:44.641 00:31:36 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.641 00:31:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:44.641 ************************************ 00:27:44.641 END TEST keyring_linux 00:27:44.641 ************************************ 00:27:44.641 00:31:36 -- common/autotest_common.sh@1142 -- # return 0 00:27:44.641 00:31:36 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 
-- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:44.641 00:31:36 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:44.641 00:31:36 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:44.898 00:31:36 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:44.898 00:31:36 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:44.898 00:31:36 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:44.898 00:31:36 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:44.898 00:31:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.898 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.898 00:31:36 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:44.898 00:31:36 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:44.898 00:31:36 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:44.898 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:27:46.794 INFO: APP EXITING 00:27:46.795 INFO: killing all VMs 00:27:46.795 INFO: killing vhost app 00:27:46.795 INFO: EXIT DONE 00:27:47.727 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:47.727 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:47.727 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:47.727 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:47.727 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:47.727 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:47.727 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:47.727 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:47.727 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:47.727 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:47.727 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:47.727 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:47.727 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:47.727 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:47.727 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:47.727 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:47.727 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:49.100 Cleaning 00:27:49.100 Removing: /var/run/dpdk/spdk0/config 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:49.100 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:49.100 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:49.100 Removing: 
/var/run/dpdk/spdk1/config 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:49.100 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:49.100 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:49.100 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:49.100 Removing: /var/run/dpdk/spdk2/config 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:49.100 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:49.100 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:49.100 Removing: /var/run/dpdk/spdk3/config 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:49.101 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:49.101 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:49.101 Removing: /var/run/dpdk/spdk4/config 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:49.101 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:49.101 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:49.101 Removing: /dev/shm/bdev_svc_trace.1 00:27:49.101 Removing: /dev/shm/nvmf_trace.0 00:27:49.101 Removing: /dev/shm/spdk_tgt_trace.pid2157718 00:27:49.101 Removing: /var/run/dpdk/spdk0 00:27:49.101 Removing: /var/run/dpdk/spdk1 00:27:49.101 Removing: /var/run/dpdk/spdk2 00:27:49.101 Removing: /var/run/dpdk/spdk3 00:27:49.101 Removing: /var/run/dpdk/spdk4 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2156107 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2156841 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2157718 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2158095 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2158786 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2158926 
00:27:49.101 Removing: /var/run/dpdk/spdk_pid2159644 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2159769 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2160013 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2161211 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2162249 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2162566 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2162754 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2162960 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2163152 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2163323 00:27:49.101 Removing: /var/run/dpdk/spdk_pid2163589 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2163769 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2164001 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2166424 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2166616 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2166812 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2166898 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2167227 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2167348 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2167773 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2167794 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2168069 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2168208 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2168378 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2168396 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2168878 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2169032 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2169224 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2169400 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2169546 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2169613 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2169809 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2170046 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2170199 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2170413 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2170636 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2170793 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2170973 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2171228 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2171381 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2171601 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2171887 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2172086 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2172364 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2172522 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2172734 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2173390 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2173622 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2173789 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2174056 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2174220 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2174410 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2174741 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2176675 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2203117 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2205742 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2213345 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2216641 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2218992 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2219453 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2223372 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2227225 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2227325 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2227887 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2228545 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2229127 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2229518 
00:27:49.358 Removing: /var/run/dpdk/spdk_pid2229605 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2229750 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2229879 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2229886 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2230538 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2231090 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2231742 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2232139 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2232155 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2232404 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2233320 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2234143 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2239386 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2239654 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2242296 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2246618 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2248679 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2255058 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2260265 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2261578 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2262247 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2272575 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2274792 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2300252 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2303262 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2304834 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2306142 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2306283 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2306420 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2306554 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2307004 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2308320 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2309040 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2309364 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2311088 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2311523 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2312080 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2314573 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2320503 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2323156 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2327047 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2327990 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2328992 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2331628 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2333974 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2338817 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2338819 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2341589 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2341730 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2341869 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2342253 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2342258 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2345015 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2345344 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2347919 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2349877 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2353296 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2356739 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2363094 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2367446 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2367448 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2380533 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2380943 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2381343 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2381753 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2382336 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2382746 
00:27:49.358 Removing: /var/run/dpdk/spdk_pid2383275 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2383687 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2386305 00:27:49.358 Removing: /var/run/dpdk/spdk_pid2386449 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2390239 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2390416 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2392022 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2397065 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2397071 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2399975 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2401378 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2402785 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2403755 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2405685 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2406554 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2411956 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2412348 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2412742 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2414198 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2414578 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2414980 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2417419 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2417427 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2418885 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2419250 00:27:49.616 Removing: /var/run/dpdk/spdk_pid2419385 00:27:49.616 Clean 00:27:49.616 00:31:41 -- common/autotest_common.sh@1451 -- # return 0 00:27:49.616 00:31:41 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:49.616 00:31:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:49.616 00:31:41 -- common/autotest_common.sh@10 -- # set +x 00:27:49.616 00:31:41 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:49.616 00:31:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:49.616 00:31:41 -- common/autotest_common.sh@10 -- # set +x 00:27:49.616 00:31:41 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:49.616 00:31:41 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:49.616 00:31:41 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:49.616 00:31:41 -- spdk/autotest.sh@391 -- # hash lcov 00:27:49.616 00:31:41 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:49.616 00:31:41 -- spdk/autotest.sh@393 -- # hostname 00:27:49.616 00:31:41 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:49.874 geninfo: WARNING: invalid characters removed from testname! 
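The records that follow post-process the coverage data: the baseline and test captures are merged, then out-of-tree sources are filtered away one pattern at a time. Condensed here with the long --rc option set elided (it appears verbatim in the lcov invocations below):

out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output

lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
     -o "$out/cov_total.info"                    # merge baseline + test capture
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pat" \
         -o "$out/cov_total.info"                # strip vendored and system code
done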
00:28:21.930 00:32:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:21.930 00:32:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:25.208 00:32:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:27.819 00:32:19 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:31.994 00:32:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:35.273 00:32:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:37.802 00:32:29 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:37.802 00:32:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.802 00:32:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:37.802 00:32:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.802 00:32:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.802 00:32:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.802 00:32:29 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.802 00:32:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.802 00:32:29 -- paths/export.sh@5 -- $ export PATH 00:28:37.802 00:32:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.802 00:32:29 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:37.802 00:32:29 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:37.802 00:32:29 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721082749.XXXXXX 00:28:37.802 00:32:29 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721082749.VNToAg 00:28:37.802 00:32:29 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:37.802 00:32:29 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:37.802 00:32:29 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:37.802 00:32:29 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:37.802 00:32:29 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:37.802 00:32:29 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:37.803 00:32:29 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:37.803 00:32:29 -- common/autotest_common.sh@10 -- $ set +x 00:28:37.803 00:32:29 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:37.803 00:32:29 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:37.803 00:32:29 -- pm/common@17 -- $ local monitor 00:28:37.803 00:32:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:37.803 00:32:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:37.803 00:32:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:37.803 00:32:29 -- pm/common@21 -- $ date +%s 00:28:37.803 00:32:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:37.803 00:32:29 -- pm/common@21 -- $ date +%s 00:28:37.803 
00:32:29 -- pm/common@25 -- $ sleep 1 00:28:37.803 00:32:29 -- pm/common@21 -- $ date +%s 00:28:37.803 00:32:29 -- pm/common@21 -- $ date +%s 00:28:37.803 00:32:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082749 00:28:37.803 00:32:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082749 00:28:37.803 00:32:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082749 00:28:37.803 00:32:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082749 00:28:37.803 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082749_collect-vmstat.pm.log 00:28:37.803 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082749_collect-cpu-load.pm.log 00:28:37.803 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082749_collect-cpu-temp.pm.log 00:28:37.803 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082749_collect-bmc-pm.bmc.pm.log 00:28:38.741 00:32:30 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:38.741 00:32:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:38.741 00:32:30 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:38.741 00:32:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:38.741 00:32:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:38.741 00:32:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:38.741 00:32:30 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:38.741 00:32:30 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:38.741 00:32:30 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:38.741 00:32:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:38.741 00:32:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:38.741 00:32:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:38.741 00:32:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:38.741 00:32:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:38.741 00:32:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:38.741 00:32:30 -- pm/common@44 -- $ pid=2429089 00:28:38.741 00:32:30 -- pm/common@50 -- $ kill -TERM 2429089 00:28:38.741 00:32:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:38.741 00:32:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:38.741 00:32:30 -- pm/common@44 -- $ pid=2429091 00:28:38.741 00:32:30 -- pm/common@50 -- $ kill 
-TERM 2429091 00:28:38.741 00:32:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:38.741 00:32:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:38.741 00:32:30 -- pm/common@44 -- $ pid=2429093 00:28:38.741 00:32:30 -- pm/common@50 -- $ kill -TERM 2429093 00:28:38.741 00:32:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:38.741 00:32:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:38.741 00:32:30 -- pm/common@44 -- $ pid=2429123 00:28:38.741 00:32:30 -- pm/common@50 -- $ sudo -E kill -TERM 2429123 00:28:38.741 + [[ -n 2072466 ]] 00:28:38.741 + sudo kill 2072466 00:28:38.751 [Pipeline] } 00:28:38.770 [Pipeline] // stage 00:28:38.775 [Pipeline] } 00:28:38.797 [Pipeline] // timeout 00:28:38.801 [Pipeline] } 00:28:38.814 [Pipeline] // catchError 00:28:38.819 [Pipeline] } 00:28:38.834 [Pipeline] // wrap 00:28:38.839 [Pipeline] } 00:28:38.853 [Pipeline] // catchError 00:28:38.862 [Pipeline] stage 00:28:38.864 [Pipeline] { (Epilogue) 00:28:38.881 [Pipeline] catchError 00:28:38.883 [Pipeline] { 00:28:38.899 [Pipeline] echo 00:28:38.901 Cleanup processes 00:28:38.907 [Pipeline] sh 00:28:39.191 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:39.191 2429243 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:39.191 2429353 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:39.205 [Pipeline] sh 00:28:39.483 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:39.483 ++ grep -v 'sudo pgrep' 00:28:39.483 ++ awk '{print $1}' 00:28:39.483 + sudo kill -9 2429243 00:28:39.495 [Pipeline] sh 00:28:39.777 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:47.908 [Pipeline] sh 00:28:48.186 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:48.186 Artifacts sizes are good 00:28:48.200 [Pipeline] archiveArtifacts 00:28:48.207 Archiving artifacts 00:28:48.393 [Pipeline] sh 00:28:48.673 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:48.688 [Pipeline] cleanWs 00:28:48.700 [WS-CLEANUP] Deleting project workspace... 00:28:48.700 [WS-CLEANUP] Deferred wipeout is used... 00:28:48.707 [WS-CLEANUP] done 00:28:48.708 [Pipeline] } 00:28:48.729 [Pipeline] // catchError 00:28:48.740 [Pipeline] sh 00:28:49.032 + logger -p user.info -t JENKINS-CI 00:28:49.041 [Pipeline] } 00:28:49.058 [Pipeline] // stage 00:28:49.064 [Pipeline] } 00:28:49.084 [Pipeline] // node 00:28:49.091 [Pipeline] End of Pipeline 00:28:49.126 Finished: SUCCESS